CN105828068A - Method and device for carrying out occlusion detection on camera and terminal device - Google Patents
- Publication number
- CN105828068A CN105828068A CN201610298833.XA CN201610298833A CN105828068A CN 105828068 A CN105828068 A CN 105828068A CN 201610298833 A CN201610298833 A CN 201610298833A CN 105828068 A CN105828068 A CN 105828068A
- Authority
- CN
- China
- Prior art keywords
- data
- histogram
- camera
- feature data
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Pending
Classifications
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N17/00—Diagnosis, testing or measuring for television systems or their details
- H04N17/002—Diagnosis, testing or measuring for television systems or their details for television cameras
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N23/00—Cameras or camera modules comprising electronic image sensors; Control thereof
- H04N23/90—Arrangement of cameras or camera modules, e.g. multiple cameras in TV studios or sports stadiums
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T2207/00—Indexing scheme for image analysis or image enhancement
- G06T2207/10—Image acquisition modality
- G06T2207/10141—Special mode during image acquisition
Abstract
The invention discloses a method, a device, and a terminal device for performing occlusion detection on a camera. The camera includes at least a first camera and a second camera. The method comprises the following steps: obtaining first image data and second image data captured by the first camera and the second camera for the same scene at the same moment; obtaining first histogram data and second histogram data corresponding to the first image data and the second image data; extracting corresponding first feature data and second feature data from the first histogram data and the second histogram data; and detecting, based on the first feature data and the second feature data, whether the first camera and/or the second camera is occluded. According to the embodiments of the method, the device, and the terminal device, occlusion of a camera can be detected even in a low-light scene, improving the accuracy of occlusion detection.
Description
Technical field
The present invention relates to the technical field of image processing, and more particularly to a method, a device, and a terminal device for performing occlusion detection on a camera.
Background art
With the development of information technology, terminal devices have become increasingly widespread, and among their features the camera function is one that users value more and more. To improve the imaging quality of terminal devices, color-plus-monochrome dual-camera photography has attracted growing attention.
The basic principle of a color/monochrome dual camera is to use the dual-camera module to capture a color image and a monochrome image of the same scene simultaneously, and then combine the two images into a single higher-quality image through image synthesis. When shooting with the dual camera, the color camera generally serves as the primary camera and the monochrome camera as the secondary camera: the primary camera captures the color information of the scene, while the secondary camera captures information such as the contours, details, and brightness of the scene.
Normally, the primary camera also supplies the data shown in the preview interface of the terminal device's camera application, so if it is blocked the user notices immediately in the preview. The frames captured by the secondary camera, however, do not appear in the preview. If the secondary camera is blocked without the user noticing, no monochrome image can be captured, image synthesis cannot be performed, and the camera application cannot produce a high-quality composite image.
Summary of the invention
In view of the above problems, the present invention proposes a method, a device, and a terminal device for performing occlusion detection on a camera, so that the terminal device can capture high-quality images even in a low-light environment.
In a first aspect, an embodiment of the present invention provides a method for performing occlusion detection on a camera. The camera includes at least a first camera and a second camera, where the first image data captured by the first camera can be previewed, and the second image data captured by the second camera cannot be previewed.
The method includes:
obtaining first image data and second image data captured by the first camera and the second camera, respectively, for the same scene at the same moment;
obtaining first histogram data and second histogram data corresponding to the first image data and the second image data, respectively;
extracting first feature data from the first histogram data, and extracting second feature data from the second histogram data;
detecting, based on the first feature data and the second feature data, whether the first camera and/or the second camera is occluded.
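The four steps above can be sketched end to end. This is a minimal illustrative implementation assuming 8-bit grayscale frames and numpy; the decision rule at the end is a placeholder of my own (an occluded camera tends to produce a narrow histogram concentrated on few gray levels), not the patent's actual logical operations.

```python
import numpy as np

def detect_occlusion(img1: np.ndarray, img2: np.ndarray) -> bool:
    """Minimal sketch of the claimed pipeline.

    img1 comes from the first (previewed) camera, img2 from the
    second (non-previewed) camera, captured at the same moment for
    the same scene.
    """
    # Obtain the first and second histogram data (256 gray levels).
    h1, _ = np.histogram(img1, bins=256, range=(0, 256))
    h2, _ = np.histogram(img2, bins=256, range=(0, 256))
    # Extract one feature per histogram: the number of gray levels
    # actually used by each frame.
    used1, used2 = np.count_nonzero(h1), np.count_nonzero(h2)
    # Placeholder comparison: flag the second camera when its frame
    # uses drastically fewer gray levels than the first camera's.
    return used2 < used1 // 8
```

A frame from a blocked lens collapses to a handful of gray levels, which this comparison picks up even when overall brightness is low.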
In a possible design, the step of detecting, based on the first feature data and the second feature data, whether the first camera and/or the second camera is occluded includes:
obtaining, for the first feature data, the corresponding second feature data;
comparing the first feature data with the corresponding second feature data to obtain a comparison result;
detecting, based on the first feature data and/or the second feature data and/or the comparison result, whether the first camera and/or the second camera is occluded.
In a possible design, the step of detecting, based on the first feature data and/or the second feature data and/or the comparison result, whether the first camera and/or the second camera is occluded includes:
performing a first logical operation on the first feature data and/or the second feature data and/or the comparison result to obtain a first operation result;
if the first operation result is true, determining that at least one of the first camera and the second camera is occluded;
performing a second logical operation on the first feature data and/or the second feature data and/or the comparison result to obtain a second operation result, where the second logical operation differs from the first logical operation;
if the second operation result is true, determining that the first camera is occluded;
if the second operation result is false, determining whether the first feature data and/or the second feature data exceed a preset threshold;
if so, determining that the second camera is occluded;
if not, determining that both the first camera and the second camera are occluded.
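The cascaded branching described above can be written out directly. The predicates themselves (the two logical operations and the threshold test) are left abstract here; only the control flow follows the patent text.

```python
def classify_occlusion(op1: bool, op2: bool,
                       feature_exceeds_threshold: bool) -> str:
    """Cascaded decision mirroring the claimed control flow.

    op1 -- result of the first logical operation on the feature data
    op2 -- result of the second (different) logical operation
    feature_exceeds_threshold -- whether the relevant feature data
        exceed the preset threshold
    """
    if not op1:
        return "neither camera occluded"
    # op1 true: at least one camera is occluded.
    if op2:
        return "first camera occluded"
    if feature_exceeds_threshold:
        return "second camera occluded"
    return "both cameras occluded"
```

Note how the first operation acts as a cheap gate: only frames that already look suspicious go through the finer second-stage classification.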
In a possible design, the method further includes:
if the first operation result is false, determining that neither the first camera nor the second camera is occluded.
In a possible design, the first feature data include at least one or more of the following statistics: a first mean, a first standard deviation, a first feature gray-level count, a first histogram peak, and the gray level at which the first histogram peak occurs;
the second feature data include at least one or more of the following statistics: a second mean, a second standard deviation, a second feature gray-level count, a second histogram peak, and the gray level at which the second histogram peak occurs;
where the first or second feature gray-level count is the total number of gray levels whose pixel count is not zero.
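The five statistics named above are all cheap to compute from a 256-bin histogram. A sketch (dictionary keys are my own naming, and a non-empty histogram is assumed):

```python
import numpy as np

def histogram_features(hist: np.ndarray) -> dict:
    """Compute the mean, standard deviation, feature gray-level count,
    histogram peak, and peak gray level from a gray-level histogram.

    hist[g] is the pixel count at gray level g; hist must be non-empty.
    """
    levels = np.arange(hist.size)
    total = hist.sum()
    mean = float((levels * hist).sum() / total)
    std = float(np.sqrt(((levels - mean) ** 2 * hist).sum() / total))
    return {
        "mean": mean,
        "std": std,
        # Total number of gray levels whose pixel count is not zero.
        "nonzero_levels": int(np.count_nonzero(hist)),
        "peak": int(hist.max()),           # histogram peak value
        "peak_level": int(hist.argmax()),  # gray level at the peak
    }
```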
In a possible design, the step of comparing the first feature data with the corresponding second feature data to obtain a comparison result includes:
calculating the difference between the first mean and the second mean to obtain a first difference;
calculating the difference between the first standard deviation and the second standard deviation to obtain a second difference;
calculating the difference between the first feature gray-level count and the second feature gray-level count to obtain a third difference;
calculating the difference between the second feature gray-level count and the first feature gray-level count to obtain a fourth difference;
calculating the difference between the gray level at the first histogram peak and the gray level at the second histogram peak to obtain a fifth difference;
calculating the difference between the gray level at the second histogram peak and the gray level at the first histogram peak to obtain a sixth difference;
comparing the magnitude of the first feature gray-level count against the second feature gray-level count to obtain a first comparison value;
comparing the magnitude of the gray level at the first histogram peak against the gray level at the second histogram peak to obtain a second comparison value.
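The six differences and two comparison values listed above map directly onto arithmetic over the two feature sets. A sketch, assuming feature dictionaries with hypothetical keys "mean", "std", "nonzero_levels", and "peak_level":

```python
def compare_features(f1: dict, f2: dict) -> dict:
    """Form the six differences and two size comparisons between the
    first camera's features (f1) and the second camera's (f2)."""
    return {
        "d1": f1["mean"] - f2["mean"],                      # first difference
        "d2": f1["std"] - f2["std"],                        # second difference
        "d3": f1["nonzero_levels"] - f2["nonzero_levels"],  # third difference
        "d4": f2["nonzero_levels"] - f1["nonzero_levels"],  # fourth difference
        "d5": f1["peak_level"] - f2["peak_level"],          # fifth difference
        "d6": f2["peak_level"] - f1["peak_level"],          # sixth difference
        "c1": f1["nonzero_levels"] > f2["nonzero_levels"],  # first comparison
        "c2": f1["peak_level"] > f2["peak_level"],          # second comparison
    }
```

The third/fourth and fifth/sixth differences are signed mirrors of each other, which lets the later logical operations test occlusion of either camera without absolute values.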
In a possible design, the step of performing the first logical operation on the first feature data and/or the second feature data and/or the comparison result to obtain the first operation result includes:
performing a logical operation based on the first and/or second and/or third and/or fourth and/or fifth and/or sixth differences and/or the first and/or second comparison values, combined with the results of comparing the first feature gray-level count and/or the second feature gray-level count and/or the gray level at the first histogram peak and/or the gray level at the second histogram peak against corresponding preset thresholds, to obtain the first operation result.
In a possible design, the step of performing the second logical operation on the first feature data and/or the second feature data and/or the comparison result to obtain the second operation result includes:
performing a logical operation based on the first comparison value and/or the second comparison value, combined with the results of comparing the first feature gray-level count and/or the second feature gray-level count and/or the gray level at the first histogram peak and/or the gray level at the second histogram peak against corresponding preset thresholds, to obtain the second operation result.
In a possible design, before the step of extracting the first feature data from the first histogram data and extracting the second feature data from the second histogram data, the method further includes:
normalizing the first histogram data and the second histogram data.
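One plausible reading of this normalization step is scaling each histogram to a probability distribution, so the two cameras' histograms remain comparable even if their sensors have different resolutions (and hence different total pixel counts). The patent does not spell out the formula, so this is an assumption:

```python
import numpy as np

def normalize_histogram(hist: np.ndarray) -> np.ndarray:
    """Scale a histogram so its bins sum to 1 (probability form).

    An all-zero histogram is returned unchanged (as floats) rather
    than dividing by zero.
    """
    total = hist.sum()
    return hist / total if total else hist.astype(float)
```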
In a possible design, the method further includes:
when it is detected that the first camera and/or the second camera is occluded, generating an occlusion prompt;
presenting the occlusion prompt in the preview interface.
In a second aspect, an embodiment of the present invention provides a device for performing occlusion detection on a camera. The camera includes at least a first camera and a second camera, where the first image data captured by the first camera can be previewed, and the second image data captured by the second camera cannot be previewed.
The device has the function of implementing the behavior of the method for performing occlusion detection on a camera in the first aspect above. The function may be implemented by hardware, or by hardware executing corresponding software. The hardware or software includes one or more modules corresponding to the function above.
The device may include the following modules:
an image capture module, adapted to obtain first image data and second image data captured by the first camera and the second camera, respectively, for the same scene at the same moment;
a histogram data acquisition module, adapted to obtain first histogram data and second histogram data corresponding to the first image data and the second image data, respectively;
a feature extraction module, adapted to extract first feature data from the first histogram data and second feature data from the second histogram data;
an occlusion detection module, adapted to detect, based on the first feature data and the second feature data, whether the first camera and/or the second camera is occluded.
In a possible design, the occlusion detection module is further adapted to:
obtain, for the first feature data, the corresponding second feature data;
compare the first feature data with the corresponding second feature data to obtain a comparison result;
detect, based on the first feature data and/or the second feature data and/or the comparison result, whether the first camera and/or the second camera is occluded.
In a possible design, the occlusion detection module is further adapted to:
perform a first logical operation on the first feature data and/or the second feature data and/or the comparison result to obtain a first operation result;
if the first operation result is true, determine that at least one of the first camera and the second camera is occluded;
perform a second logical operation on the first feature data and/or the second feature data and/or the comparison result to obtain a second operation result, where the second logical operation differs from the first logical operation;
if the second operation result is true, determine that the first camera is occluded;
if the second operation result is false, determine whether the first feature data and/or the second feature data exceed a preset threshold;
if so, determine that the second camera is occluded;
if not, determine that both the first camera and the second camera are occluded.
In a possible design, the device further includes:
an occlusion determination module, adapted to determine, when the first operation result is false, that neither the first camera nor the second camera is occluded.
In a possible design, the first feature data include at least one or more of the following statistics: a first mean, a first standard deviation, a first feature gray-level count, a first histogram peak, and the gray level at which the first histogram peak occurs; the second feature data include at least one or more of the following statistics: a second mean, a second standard deviation, a second feature gray-level count, a second histogram peak, and the gray level at which the second histogram peak occurs; where the first or second feature gray-level count is the total number of gray levels whose pixel count is not zero.
In a possible design, the occlusion detection module is further adapted to:
calculate the difference between the first mean and the second mean to obtain a first difference;
calculate the difference between the first standard deviation and the second standard deviation to obtain a second difference;
calculate the difference between the first feature gray-level count and the second feature gray-level count to obtain a third difference;
calculate the difference between the second feature gray-level count and the first feature gray-level count to obtain a fourth difference;
calculate the difference between the gray level at the first histogram peak and the gray level at the second histogram peak to obtain a fifth difference;
calculate the difference between the gray level at the second histogram peak and the gray level at the first histogram peak to obtain a sixth difference;
compare the magnitude of the first feature gray-level count against the second feature gray-level count to obtain a first comparison value;
compare the magnitude of the gray level at the first histogram peak against the gray level at the second histogram peak to obtain a second comparison value.
In a possible design, the occlusion detection module is further adapted to:
perform a logical operation based on the first and/or second and/or third and/or fourth and/or fifth and/or sixth differences and/or the first and/or second comparison values, combined with the results of comparing the first feature gray-level count and/or the second feature gray-level count and/or the gray level at the first histogram peak and/or the gray level at the second histogram peak against corresponding preset thresholds, to obtain the first operation result.
In a possible design, the occlusion detection module is further adapted to:
perform a logical operation based on the first comparison value and/or the second comparison value, combined with the results of comparing the first feature gray-level count and/or the second feature gray-level count and/or the gray level at the first histogram peak and/or the gray level at the second histogram peak against corresponding preset thresholds, to obtain the second operation result.
In a possible design, the device further includes:
a normalization module, adapted to normalize the first histogram data and the second histogram data.
In a possible design, the device further includes:
a prompt module, adapted to generate an occlusion prompt when it is detected that the first camera and/or the second camera is occluded, and to present the occlusion prompt in the preview interface.
In a possible design, the device for performing occlusion detection on a camera includes a camera, a memory, and a processor;
the memory is configured to store a program supporting the device in performing the above method, and the processor is configured to execute the program stored in the memory. The device may further include a communication interface for communicating with other devices or a communication network.
The camera includes at least a first camera and a second camera, where the first image data captured by the first camera can be previewed, and the second image data captured by the second camera cannot be previewed;
the memory is configured to store instructions for obtaining first image data and second image data captured by the first camera and the second camera, respectively, for the same scene at the same moment; instructions for obtaining first histogram data and second histogram data corresponding to the first image data and the second image data; instructions for extracting first feature data from the first histogram data and second feature data from the second histogram data; and instructions for detecting, based on the first feature data and the second feature data, whether the first camera and/or the second camera is occluded;
the processor is configured to:
obtain, according to the corresponding instructions, the first image data and the second image data captured by the first camera and the second camera, respectively, for the same scene at the same moment;
obtain, according to the corresponding instructions, the first histogram data and the second histogram data corresponding to the first image data and the second image data, respectively;
extract, according to the corresponding instructions, the first feature data from the first histogram data and the second feature data from the second histogram data;
detect, according to the corresponding instructions, based on the first feature data and the second feature data, whether the first camera and/or the second camera is occluded.
In a third aspect, an embodiment of the present invention provides a computer storage medium for storing the computer software instructions used by the above device for performing occlusion detection on a camera, including the program designed for performing the above aspects.
Compared with the prior art, the solution provided by the present invention can extract feature data separately from the two histograms obtained by the dual camera and determine whether a camera is occluded through a comparative analysis of the feature data, so that camera occlusion can be detected even in a low-light scene, improving the accuracy of occlusion detection.
These and other aspects of the present invention will become more readily apparent from the following description.
Brief description of the drawings
To illustrate the technical solutions in the embodiments of the present invention more clearly, the accompanying drawings required for describing the embodiments are briefly introduced below. Obviously, the drawings described below show only some embodiments of the present invention, and those skilled in the art may obtain other drawings from them without creative effort.
Fig. 1 shows a flowchart of the steps of an embodiment of a method for performing occlusion detection on a camera.
Fig. 2 shows a flowchart of the steps of another embodiment of a method for performing occlusion detection on a camera.
Fig. 3 shows a structural block diagram of an embodiment of a device for performing occlusion detection on a camera.
Fig. 4 shows a structural block diagram of an embodiment of a terminal device.
Detailed description of the invention
To enable those skilled in the art to better understand the solutions of the present invention, the technical solutions in the embodiments of the present invention are described clearly and completely below with reference to the accompanying drawings in the embodiments of the present invention.
Some of the flows described in the specification, the claims, and the drawings above contain multiple operations that appear in a particular order, but it should be clearly understood that these operations need not be executed in the order in which they appear herein and may be executed in parallel. Operation numbers such as 101 and 102 serve only to distinguish different operations; the numbers themselves do not imply any execution order. In addition, these flows may include more or fewer operations, which may be executed in order or in parallel. Note that terms such as "first" and "second" herein are used to distinguish different messages, devices, modules, and so on; they do not imply a sequence, nor do they require that the "first" and "second" items be of different types.
Obviously, the described embodiments are only some, rather than all, of the embodiments of the present invention. All other embodiments obtained by those skilled in the art based on the embodiments of the present invention without creative effort fall within the protection scope of the present invention.
The embodiments of the present invention can be applied to a terminal device equipped with a camera, the camera being used to implement photo and video functions.
Further, the terminal device may also include a display screen, which implements the preview function for the shot, i.e., displays in real time the picture currently captured by the camera for the user to preview, thereby serving as a viewfinder.
The terminal device may include a mobile phone, a tablet computer, a digital camera, a personal digital assistant, a wearable device (such as glasses or a watch), and so on, and its operating system may include Android, iOS, Windows Phone, Windows, etc. The present invention imposes no limitation in this respect.
In the embodiments of the present invention, the camera in the terminal device may be a dual camera. The dual camera may include two independent camera modules mounted on the same circuit board, each camera module including a lens unit, a corresponding voice coil motor serving as a driver and carrying the lens unit, and an image sensor for collecting the captured image information, with a signal-transmission circuit board shared by the two camera modules. The main components of each voice coil motor are a coil, a magnet, and a spring plate. When energized, the coil carries a current and thus generates a magnetic field of a certain strength. Under the combined action of the magnet's field and the field induced by the coil current, the energized coil experiences an Ampere force directed along the optical axis, which changes the distance between the lens and the image sensor, thereby achieving focusing and capturing an image. Of course, the dual camera of the embodiments of the present invention is not limited to the above manner of capturing images and may adopt other capture schemes; the embodiments of the present invention impose no limitation in this respect.
In the embodiments of the present invention, the first camera captures first image data and the second camera captures second image data, where the first image data can be previewed in the preview interface and the second image data cannot.
Referring to Fig. 1, a flowchart of the steps of an embodiment of a method for performing occlusion detection on a camera is shown, where the camera includes at least a first camera and a second camera, the first image data captured by the first camera can be previewed, and the second image data captured by the second camera cannot be previewed.
This embodiment of the present invention may specifically include the following steps:
Step 101: obtain first image data and second image data captured by the first camera and the second camera, respectively, for the same scene at the same moment;
Step 102: obtain first histogram data and second histogram data corresponding to the first image data and the second image data, respectively;
Step 103: extract first feature data from the first histogram data, and extract second feature data from the second histogram data;
Step 104: detect, based on the first feature data and the second feature data, whether the first camera and/or the second camera is occluded.
In this embodiment of the present invention, feature data can be extracted separately from the two histograms obtained by the dual camera, and whether a camera is occluded can be determined through a comparative analysis of the feature data, so that camera occlusion can be detected even in a low-light scene, improving the accuracy of occlusion detection.
With reference to Fig. 2, a flow chart of the steps of another embodiment of a method for performing occlusion detection on cameras is shown, which may specifically include the following steps:
Step 201: obtain first image data and second image data captured by the first camera and the second camera, respectively, at the same moment for the same photographed scene;
In practice, the camera application of a terminal device has a function key for switching between single-camera and dual-camera modes. When the user opens the camera application, the device initially enters single-camera shooting mode (i.e., ordinary shooting using the first camera). If the user taps the switching key, the device switches to dual-camera mode, opens the first camera and the second camera simultaneously, and uses the first camera for image preview.
In one implementation, the first camera may capture the first image data as follows: after the first camera is opened, the lens of the first camera projects the optical image of the photographed scene onto the surface of the image sensor, where it is converted into an electrical signal. After analog-to-digital (A/D) conversion, the signal becomes a digital image signal, which a digital signal processing chip (DSP) or codec compresses and converts into a specific image file format; the result is transmitted over the data bus to the central processing unit (CPU) of the terminal device for processing, yielding the corresponding first image data.
The second image data is acquired in a manner similar to the first image data described above; reference may be made to the acquisition of the first image data, which is not repeated here in the embodiment of the present invention.
Step 202: obtain first histogram data and second histogram data corresponding to the first image data and the second image data, respectively;
After the first image data and the second image data are obtained, the first histogram data corresponding to the first image data and the second histogram data corresponding to the second image data can further be obtained.
The first histogram data and the second histogram data are data of image histograms. An image histogram is a statistical table reflecting the pixel distribution of an image: its abscissa represents the pixel category, which may be gray level, RGB color, brightness, and so on, and its ordinate represents the total number of pixels of each category in the image, or their percentage of all pixels.
In a preferred embodiment, the image histogram indicated by the first histogram data or the second histogram data may be a gray-level histogram or a brightness histogram, i.e., a graph of the relation between the gray levels in a frame of image data and the probability of a pixel having each gray level. The abscissa of the graph is the gray level, and the ordinate is the number of pixels having that gray level, or the probability of that gray level occurring.
In one embodiment, assume n (= a*b) is the total number of pixels of a frame of image data f(x, y), L is the number of gray levels, s_k is the gray value of the k-th level of f(x, y), and n_k is the number of pixels in f(x, y) having gray value s_k. The histogram data extracted from the image data is then:
p(s_k) = n_k / n, k = 0, 1, ..., L-1 (1)
According to formula (1), p(s_k) can be plotted in a rectangular coordinate system with s_k as the abscissa and p(s_k) as the ordinate, giving the image histogram.
In this way, the first histogram data p1(s_k) and the second histogram data p2(s_k) can be obtained respectively.
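As a rough sketch of how formula (1) can be computed, the following Python fragment builds a normalized gray-level histogram from a flat list of pixel values. This is a minimal illustration only; the 8-bit range of 256 levels and the function name are assumptions, not part of the embodiment.

```python
def gray_histogram(pixels, levels=256):
    """p(s_k) = n_k / n per formula (1): the fraction of pixels at each gray level."""
    n = len(pixels)                 # total pixel count n
    counts = [0] * levels           # n_k: number of pixels with gray value s_k
    for v in pixels:
        counts[v] += 1
    return [c / n for c in counts]  # p(s_k), k = 0 .. levels-1

# Example: a tiny 2x2 frame with gray values 0, 0, 128, 255
p1 = gray_histogram([0, 0, 128, 255])
# p1[0] = 0.5, p1[128] = 0.25, p1[255] = 0.25, and the bins sum to 1
```

The same routine applied to the second frame yields p2(s_k).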
It should be noted that the above manner of obtaining the first histogram data or the second histogram data is merely an example of the embodiment of the present invention; those skilled in the art may obtain the first or second histogram data in other ways, which is not restricted by the embodiment of the present invention.
Step 203: normalize the first histogram data and the second histogram data;
To avoid differences in data magnitude caused by differing parameters of the first camera and the second camera, the embodiment of the present invention may also normalize the first histogram data and the second histogram data.
Normalization is a way of simplifying calculation: an expression with dimensions is transformed into a dimensionless expression, i.e., a scalar.
In one implementation, the first histogram data may be normalized as follows: obtain the maximum and the minimum of p1(s_k) in the first histogram data, and divide p1(s_k) of each gray level by (maximum - minimum), thus completing the normalization of the first histogram data.
The second histogram data is normalized in a manner similar to the normalization of the first histogram data described above; reference may be made to that description.
It should be understood that the above normalization is merely an example of the embodiment of the present invention; those skilled in the art may also normalize the first histogram data and the second histogram data in other ways, which is not restricted by the embodiment of the present invention.
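The text above literally prescribes dividing every bin by (maximum - minimum); a minimal sketch under that reading, with a guard against a flat histogram, might look like this (the function name and the zero-span behavior are assumptions):

```python
def range_normalize(hist):
    """Normalization as described for step 203: divide every bin of the
    histogram data by (maximum - minimum) of that histogram."""
    span = max(hist) - min(hist)
    if span == 0:          # flat histogram; avoid division by zero
        return list(hist)
    return [v / span for v in hist]

normalized = range_normalize([0.0, 0.2, 0.5, 0.3])
# span = 0.5 - 0.0 = 0.5, so the bins scale to [0.0, 0.4, 1.0, 0.6]
```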
Step 204: extract first feature data from the normalized first histogram data, and extract second feature data from the normalized second histogram data;
After the first histogram data and the second histogram data are normalized, feature extraction can be performed on each normalized histogram to obtain the corresponding first feature data and second feature data.
As a preferred example of the embodiment of the present invention, the first feature data may include at least one or more of the following statistics: a first mean, a first standard deviation, a first feature gray-level count, a first histogram peak, and the gray level at which the first histogram peak occurs. The first feature gray-level count is the total number of gray levels whose pixel count is not 0.
As a preferred example of the embodiment of the present invention, the second feature data may include at least one or more of the following statistics: a second mean, a second standard deviation, a second feature gray-level count, a second histogram peak, and the gray level at which the second histogram peak occurs. The second feature gray-level count is the total number of gray levels whose pixel count is not 0.
The mean is the quantity representing the central tendency of a data set: the sum of all the data in a group divided by the number of data in the group. It is an index reflecting the central tendency of the data.
In a preferred embodiment of the present invention, the mean μ of the histogram data can be calculated with the following formula (2):
μ = (1/n) · Σ_{i=0}^{L-1} i · x_i (2)
where x_i represents the number of pixels of the i-th gray level.
According to formula (2), the first mean μ1 can be calculated from the first histogram data, and the second mean μ2 from the second histogram data.
Standard deviation is most often used in probability statistics as a measure of statistical dispersion. It is defined as the square root of the arithmetic mean of the squared deviations of the individual values from their mean, and reflects the degree of dispersion among the individuals of a group.
In a preferred embodiment of the present invention, the standard deviation σ of the histogram data can be calculated with the following formula (3):
σ = sqrt( (1/n) · Σ_{i=0}^{L-1} x_i · (i - μ)² ) (3)
According to formula (3), the first standard deviation σ1 can be calculated from the first histogram data and the first mean, and the second standard deviation σ2 from the second histogram data and the second mean.
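The mean and standard deviation of the histogram data can be sketched in Python as below, with x_i taken as the pixel count of the i-th gray level as defined in the text. Since the equation bodies in the published text are incomplete, this is one plausible reading, and the function names are illustrative:

```python
import math

def hist_mean(counts):
    """Mean gray value: sum of i * x_i over all gray levels, divided by
    the total pixel count n (one reading of formula (2))."""
    n = sum(counts)
    return sum(i * x for i, x in enumerate(counts)) / n

def hist_std(counts):
    """Standard deviation: square root of the pixel-count-weighted mean
    squared deviation of the gray levels from the mean (formula (3))."""
    n = sum(counts)
    mu = hist_mean(counts)
    return math.sqrt(sum(x * (i - mu) ** 2 for i, x in enumerate(counts)) / n)

counts = [0] * 256
counts[10], counts[20] = 2, 2   # two pixels at level 10, two at level 20
# hist_mean(counts) = 15.0 and hist_std(counts) = 5.0
```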
The first feature gray-level count and the second feature gray-level count are the total numbers of gray levels in the first and second histogram data, respectively, whose pixel count is not 0. In a preferred embodiment of the present invention, the feature gray-level counts can be calculated as follows:
For the first histogram data, obtain the pixel count of each gray level and remove the gray levels whose pixel count is 0; the number of remaining gray levels is the first feature gray-level count SUM1.
For the second histogram data, obtain the pixel count of each gray level and remove the gray levels whose pixel count is 0; the number of remaining gray levels is the second feature gray-level count SUM2.
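The feature gray-level counts SUM1 and SUM2 described above amount to counting the non-empty histogram bins; a minimal sketch (function name assumed):

```python
def feature_level_count(counts):
    """Number of gray levels whose pixel count is not 0 (SUM1 / SUM2)."""
    return sum(1 for c in counts if c != 0)

# Example: only levels 1, 3 and 4 contain pixels, so the count is 3
sum1 = feature_level_count([0, 3, 0, 1, 2])
```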
A peak is the maximum value reached during the development of something. In a preferred embodiment of the present invention, the peak of a histogram can be calculated as follows:
Using a traversal method, the pixel counts of the gray levels are compared one by one: the larger pixel count is retained and compared with the pixel count of the next gray level, until the pixel counts of all gray levels have been compared. The resulting value is the maximum pixel count in the histogram, i.e., the histogram peak.
In this way, the first histogram peak M1 can be calculated from the first histogram data, and the second histogram peak M2 from the second histogram data.
After the first histogram peak and the second histogram peak are obtained, the gray level LM1 at which the first histogram peak occurs and the gray level LM2 at which the second histogram peak occurs can also be obtained.
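The traversal described above can be sketched as a single pass that returns both the peak and the gray level at which it occurs (the tie-breaking toward the lowest level is an assumption the text does not specify):

```python
def hist_peak(counts):
    """Traverse the gray levels, keeping the larger pixel count each step;
    returns (peak M, gray level LM where the peak occurs)."""
    peak, level = counts[0], 0
    for i in range(1, len(counts)):
        if counts[i] > peak:
            peak, level = counts[i], i
    return peak, level

M1, LM1 = hist_peak([1, 5, 3, 5])   # a tie keeps the first (lowest) level
# M1 = 5, LM1 = 1
```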
It should be noted that the above ways of calculating statistics such as the mean, the standard deviation, the first and second feature gray-level counts, and the histogram peaks are merely examples of the embodiment of the present invention; those skilled in the art may also extract other feature data from the histogram, or extract feature data in other ways, which is not limited by the embodiment of the present invention.
Step 205: based on the first feature data and the second feature data, detect whether the first camera and/or the second camera is occluded;
Since the first feature data extracted from the first image data and the second feature data extracted from the second image data reflect data captured at the same moment for the same photographed scene, in embodiments of the present invention the first feature data and the second feature data can be comparatively analyzed to find the correspondences between them. These correspondences are then compared against multiple corresponding predetermined thresholds, and the comparison results determine whether one of the lenses of the dual cameras is in an occluded state.
In a preferred embodiment of the present invention, step 205 may further include the following sub-steps:
Sub-step S11: for the first feature data, obtain the corresponding second feature data;
At the same moment and for the same photographed scene, each item of the first feature data has a corresponding item of the second feature data.
For example, for the first feature data and second feature data above, μ1 corresponds to μ2; σ1 corresponds to σ2; SUM1 corresponds to SUM2; M1 corresponds to M2; and LM1 corresponds to LM2.
Sub-step S12: compare the first feature data with the corresponding second feature data to obtain comparison results;
After the first feature data and its corresponding second feature data are obtained, the two can be compared to obtain comparison results.
In one embodiment, the following comparison results can be obtained:
(1) Calculate the difference between the first mean and the second mean to obtain a first difference;
i.e., first difference = μ1 - μ2.
(2) Calculate the difference between the first standard deviation and the second standard deviation to obtain a second difference;
i.e., second difference = σ1 - σ2.
(3) Calculate the difference between the first feature gray-level count and the second feature gray-level count to obtain a third difference;
i.e., third difference = SUM1 - SUM2.
(4) Calculate the difference between the second feature gray-level count and the first feature gray-level count to obtain a fourth difference;
i.e., fourth difference = SUM2 - SUM1.
(5) Calculate the difference between the gray level at the first histogram peak and the gray level at the second histogram peak to obtain a fifth difference;
i.e., fifth difference = LM1 - LM2.
(6) Calculate the difference between the gray level at the second histogram peak and the gray level at the first histogram peak to obtain a sixth difference;
i.e., sixth difference = LM2 - LM1.
(7) Compare the first feature gray-level count with the second feature gray-level count in magnitude to obtain a first comparison value;
i.e., the first comparison value is whether SUM1 > SUM2.
(8) Compare the gray level at the first histogram peak with the gray level at the second histogram peak in magnitude to obtain a second comparison value.
i.e., the second comparison value is whether LM1 > LM2.
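The eight comparison results (1) to (8) of sub-step S12 can be gathered in one pass; in this sketch the dictionary keys for the feature data are illustrative assumptions, not names from the embodiment:

```python
def comparison_results(f1, f2):
    """The eight comparison results of sub-step S12, given the first and
    second feature data f1 and f2 (keys assumed for illustration)."""
    return {
        "diff1": f1["mean"] - f2["mean"],              # (1) mu1 - mu2
        "diff2": f1["std"] - f2["std"],                # (2) sigma1 - sigma2
        "diff3": f1["sum"] - f2["sum"],                # (3) SUM1 - SUM2
        "diff4": f2["sum"] - f1["sum"],                # (4) SUM2 - SUM1
        "diff5": f1["peak_level"] - f2["peak_level"],  # (5) LM1 - LM2
        "diff6": f2["peak_level"] - f1["peak_level"],  # (6) LM2 - LM1
        "cmp1": f1["sum"] > f2["sum"],                 # (7) SUM1 > SUM2
        "cmp2": f1["peak_level"] > f2["peak_level"],   # (8) LM1 > LM2
    }

f1 = {"mean": 0.5, "std": 0.2, "sum": 80, "peak_level": 120}
f2 = {"mean": 0.3, "std": 0.1, "sum": 40, "peak_level": 30}
r = comparison_results(f1, f2)
# r["diff3"] = 40, r["cmp1"] is True, r["cmp2"] is True
```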
Of course, besides the above comparison results, other comparison results are also possible, such as μ1 > μ2, etc.; this is not restricted by the embodiment of the present invention.
Sub-step S13: based on the first feature data and/or the second feature data and/or the comparison results, detect whether the first camera and/or the second camera is occluded.
After the comparison results are obtained, they can be combined with the first feature data and/or the second feature data to detect whether the first camera and/or the second camera is occluded.
In a preferred embodiment of the present invention, sub-step S13 may further include the following sub-steps:
Sub-step S131: perform a first logical operation on the first feature data and/or the second feature data and/or the comparison results to obtain a first operation result;
In a specific implementation, AND/OR logical operations can be performed on the first feature data and/or the second feature data and/or the comparison results to obtain the first operation result.
In one embodiment, sub-step S131 may further be: based on the first difference and/or the second difference and/or the third difference and/or the fourth difference and/or the fifth difference and/or the sixth difference and/or the first comparison value and/or the second comparison value, perform logical operations combined with the results of comparing the first feature gray-level count and/or the second feature gray-level count and/or the gray level at the first histogram peak and/or the gray level at the second histogram peak against multiple corresponding predetermined thresholds, to obtain the first operation result.
As an example, the first operation result can be obtained according to the following formula (4):
(|μ1 - μ2| > 0.4 && |σ1 - σ2| > 0.3) ||
((LM1 < 4 || LM2 < 4) && |LM1 - LM2| >= 40) ||
(SUM2 < 10 && (SUM1 - SUM2) >= 15) ||
(SUM1 <= 35 && (SUM2 - SUM1) >= 100) ||
(SUM1 < 65 && SUM2 <= 65 && |SUM2 - SUM1| >= 30) ||
(|SUM1 - SUM2| >= 30) ||
(LM1 > LM2 && (SUM1 - SUM2) > 30) ||
(LM1 < LM2 && (SUM1 - SUM2) > 88) ||
(SUM1 < 65 && SUM2 <= 65 && (SUM2 - SUM1) >= 25 && (LM2 - LM1) >= 40) ||
(SUM2 <= 35 && (SUM1 - SUM2) >= 19) ||
(SUM1 <= 29 && (SUM2 - SUM1) >= 12) ||
(LM1 > 30 && LM2 < 30 && |LM1 - LM2| >= 18) ||
(LM2 > 250 && (|LM1 - LM2| >= 55 || |SUM1 - SUM2| >= 50)) ||
(LM1 > 70 && (LM1 - LM2) >= 40 && (SUM1 - SUM2) >= 40) (4)
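The OR-of-clauses structure of formula (4) can be sketched in Python as below. The thresholds are the empirical values listed in the formula, but the bracketing of a few clauses in the published text is garbled, so this grouping is one plausible reading, not a definitive transcription:

```python
def first_operation(u1, u2, s1, s2, SUM1, SUM2, LM1, LM2):
    """One reading of formula (4): true if any threshold clause fires."""
    return (
        (abs(u1 - u2) > 0.4 and abs(s1 - s2) > 0.3)
        or ((LM1 < 4 or LM2 < 4) and abs(LM1 - LM2) >= 40)
        or (SUM2 < 10 and SUM1 - SUM2 >= 15)
        or (SUM1 <= 35 and SUM2 - SUM1 >= 100)
        or (SUM1 < 65 and SUM2 <= 65 and abs(SUM2 - SUM1) >= 30)
        or abs(SUM1 - SUM2) >= 30
        or (LM1 > LM2 and SUM1 - SUM2 > 30)
        or (LM1 < LM2 and SUM1 - SUM2 > 88)
        or (SUM1 < 65 and SUM2 <= 65 and SUM2 - SUM1 >= 25 and LM2 - LM1 >= 40)
        or (SUM2 <= 35 and SUM1 - SUM2 >= 19)
        or (SUM1 <= 29 and SUM2 - SUM1 >= 12)
        or (LM1 > 30 and LM2 < 30 and abs(LM1 - LM2) >= 18)
        or (LM2 > 250 and (abs(LM1 - LM2) >= 55 or abs(SUM1 - SUM2) >= 50))
        or (LM1 > 70 and LM1 - LM2 >= 40 and SUM1 - SUM2 >= 40)
    )

# Identical feature data from both cameras triggers no clause:
# first_operation(0.5, 0.5, 0.2, 0.2, 70, 70, 100, 100) is False
```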
Sub-step S132: if the first operation result is true, judge that at least one of the first camera and the second camera is occluded;
When the first operation result is true, it can be determined that at least one of the first camera and the second camera is occluded; sub-steps S133 to S135 can then be performed to determine which camera is occluded.
Otherwise, if the first operation result is false, it is judged that neither the first camera nor the second camera is occluded; the shooting flow can proceed normally, and an image is recorded once the user is detected pressing the shutter.
Sub-step S133: perform a second logical operation on the first feature data and/or the second feature data and/or the comparison results to obtain a second operation result;
In a specific implementation, when the first operation result is true, AND/OR logical operations can further be performed on the first feature data and/or the second feature data and/or the comparison results to obtain the second operation result.
In one embodiment, sub-step S133 may further be: based on the first comparison value and/or the second comparison value, perform logical operations combined with the results of comparing the first feature gray-level count and/or the second feature gray-level count and/or the gray level at the first histogram peak and/or the gray level at the second histogram peak against multiple corresponding predetermined thresholds, to obtain the second operation result.
As an example, the second operation result can be obtained according to the following formula (5):
It should be noted that in the above formulas (4) and (5), all thresholds are empirical values; the specific settings of the thresholds are not restricted by the embodiment of the present invention. Moreover, formulas (4) and (5) are exemplary illustrations of the embodiment of the present invention; those skilled in the art may use other formulas for the judgment, which is not limited by the embodiment of the present invention.
Sub-step S134: if the second operation result is true, judge that the first camera is occluded;
Sub-step S135: if the second operation result is false, judge whether the first feature data and/or the second feature data exceed a predetermined threshold; if so, judge that the second camera is occluded; if not, judge that both the first camera and the second camera are occluded.
In a specific implementation, if the second operation result is true, it can be determined that the first camera is occluded. Otherwise, if the second operation result is false, it is further judged whether the first feature data and/or the second feature data exceed a predetermined threshold. For example, judge whether SUM2 < 70: if SUM2 is less than 70, it is judged that the second camera is occluded; if SUM2 is greater than or equal to 70, it is judged that both the first camera and the second camera are occluded.
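The branching of sub-steps S132 to S135 can be summarized as follows; the SUM2 < 70 test is the example threshold mentioned above, and the return strings and function name are illustrative:

```python
def classify_occlusion(first_result, second_result, SUM2, threshold=70):
    """Decision flow of sub-steps S132-S135."""
    if not first_result:              # S132 false branch: shoot normally
        return "neither camera occluded"
    if second_result:                 # S134: second operation result true
        return "first camera occluded"
    if SUM2 < threshold:              # S135 example test: SUM2 < 70
        return "second camera occluded"
    return "both cameras occluded"

# e.g. classify_occlusion(True, False, 50) reports the second camera occluded
```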
Step 206: when it is detected that the first camera and/or the second camera is occluded, generate an occlusion prompt and present the occlusion prompt in the preview interface.
Applying the embodiment of the present invention, when it is detected that the first camera and/or the second camera is occluded, an occlusion prompt can be generated and shown to the user in the preview interface, so as to prompt the user that a camera is occluded.
In embodiments of the present invention, feature data can be extracted from the histogram data of an image, and whether a camera is occluded is judged from the feature data, so that occlusion can be detected even in low-light scenes; when occlusion is detected, the user is promptly notified, helping the user capture high-quality images.
In addition, the embodiment of the present invention can enhance the practicality of dual cameras and effectively improve the user experience.
With reference to Fig. 3, a structural block diagram of an embodiment of an apparatus for performing occlusion detection on cameras is shown. The cameras include at least a first camera and a second camera, where the first image data captured by the first camera can be previewed as an image, while the second image data captured by the second camera cannot.
The apparatus of the embodiment of the present invention may include the following modules:
An image capture module 301, adapted to obtain first image data and second image data captured by the first camera and the second camera, respectively, at the same moment for the same photographed scene;
A histogram data acquisition module 302, adapted to obtain first histogram data and second histogram data corresponding to the first image data and the second image data, respectively;
A feature extraction module 303, adapted to extract first feature data from the first histogram data and second feature data from the second histogram data;
An occlusion detection module 304, adapted to detect, based on the first feature data and the second feature data, whether the first camera and/or the second camera is occluded.
In a preferred embodiment of the present invention, the occlusion detection module 304 is further adapted to:
for the first feature data, obtain the corresponding second feature data;
compare the first feature data with the corresponding second feature data to obtain comparison results;
based on the first feature data and/or the second feature data and/or the comparison results, detect whether the first camera and/or the second camera is occluded.
In a preferred embodiment of the present invention, the occlusion detection module 304 is further adapted to:
perform a first logical operation on the first feature data and/or the second feature data and/or the comparison results to obtain a first operation result;
if the first operation result is true, judge that at least one of the first camera and the second camera is occluded;
perform a second logical operation on the first feature data and/or the second feature data and/or the comparison results to obtain a second operation result, where the second logical operation differs from the first logical operation;
if the second operation result is true, judge that the first camera is occluded;
if the second operation result is false, judge whether the first feature data and/or the second feature data exceed a predetermined threshold;
if so, judge that the second camera is occluded;
if not, judge that both the first camera and the second camera are occluded.
In a preferred embodiment of the present invention, the apparatus may also include the following module:
An occlusion determination module, adapted to judge, when the first operation result is false, that neither the first camera nor the second camera is occluded.
In a preferred embodiment of the present invention, the first feature data includes at least one or more of the following statistics: a first mean, a first standard deviation, a first feature gray-level count, a first histogram peak, and the gray level at which the first histogram peak occurs; the second feature data includes at least one or more of the following statistics: a second mean, a second standard deviation, a second feature gray-level count, a second histogram peak, and the gray level at which the second histogram peak occurs. The first feature gray-level count and the second feature gray-level count are the total numbers of gray levels whose pixel count is not 0.
In a preferred embodiment of the present invention, the occlusion detection module 304 is further adapted to:
calculate the difference between the first mean and the second mean to obtain a first difference;
calculate the difference between the first standard deviation and the second standard deviation to obtain a second difference;
calculate the difference between the first feature gray-level count and the second feature gray-level count to obtain a third difference;
calculate the difference between the second feature gray-level count and the first feature gray-level count to obtain a fourth difference;
calculate the difference between the gray level at the first histogram peak and the gray level at the second histogram peak to obtain a fifth difference;
calculate the difference between the gray level at the second histogram peak and the gray level at the first histogram peak to obtain a sixth difference;
compare the first feature gray-level count with the second feature gray-level count in magnitude to obtain a first comparison value;
compare the gray level at the first histogram peak with the gray level at the second histogram peak in magnitude to obtain a second comparison value.
In a preferred embodiment of the present invention, the occlusion detection module 304 is further adapted to:
based on the first difference and/or the second difference and/or the third difference and/or the fourth difference and/or the fifth difference and/or the sixth difference and/or the first comparison value and/or the second comparison value, perform logical operations combined with the results of comparing the first feature gray-level count and/or the second feature gray-level count and/or the gray level at the first histogram peak and/or the gray level at the second histogram peak against multiple corresponding predetermined thresholds, to obtain the first operation result.
In a preferred embodiment of the present invention, the occlusion detection module 304 is further adapted to:
based on the first comparison value and/or the second comparison value, perform logical operations combined with the results of comparing the first feature gray-level count and/or the second feature gray-level count and/or the gray level at the first histogram peak and/or the gray level at the second histogram peak against multiple corresponding predetermined thresholds, to obtain the second operation result.
In a preferred embodiment of the present invention, the apparatus may also include the following module:
A normalization module, adapted to normalize the first histogram data and the second histogram data.
In a preferred embodiment of the present invention, the apparatus may also include the following module:
A prompting module, adapted to generate an occlusion prompt when it is detected that the first camera and/or the second camera is occluded, and to present the occlusion prompt in the preview interface.
The embodiment of the present invention also provides a terminal device. As shown in Fig. 4, for convenience of description only the parts relevant to the embodiment of the present invention are shown; for specific technical details not disclosed, refer to the method part of the embodiment of the present invention. The terminal may be any terminal device including a mobile phone, a tablet computer, a PDA (Personal Digital Assistant), a POS (Point of Sales) terminal, or an in-vehicle computer; the mobile phone is taken as an example:
Fig. 4 is a block diagram of part of the structure of a mobile phone related to the terminal device provided by the embodiment of the present invention. Referring to Fig. 4, the mobile phone includes components such as a radio frequency (RF) circuit 410, a memory 420, an input unit 430, a display unit 440, a sensor 450, an audio circuit 460, a Wireless Fidelity (WiFi) module 470, a processor 480, a power supply 490, and a camera 411. Those skilled in the art will understand that the phone structure shown in Fig. 4 does not constitute a limitation on the mobile phone, which may include more or fewer components than shown, combine some components, or arrange the components differently.
Each component of the mobile phone is described in detail below with reference to Fig. 4:
The RF circuit 410 can be used for receiving and sending messages, and for receiving and transmitting signals during a call; in particular, after downlink information from a base station is received, it is handed to the processor 480 for processing, and uplink data is sent to the base station. Generally, the RF circuit 410 includes, but is not limited to, an antenna, at least one amplifier, a transceiver, a coupler, a low noise amplifier (LNA), a duplexer, and the like. In addition, the RF circuit 410 can also communicate with networks and other devices via wireless communication. The wireless communication may use any communication standard or protocol, including but not limited to Global System of Mobile communication (GSM), General Packet Radio Service (GPRS), Code Division Multiple Access (CDMA), Wideband Code Division Multiple Access (WCDMA), Long Term Evolution (LTE), e-mail, Short Messaging Service (SMS), and the like.
The memory 420 can be used to store software programs and modules; the processor 480 performs the various function applications and data processing of the mobile phone by running the software programs and modules stored in the memory 420. The memory 420 may mainly include a program storage area and a data storage area: the program storage area can store the operating system and the application programs required for at least one function (such as a sound playing function, an image playing function, etc.), and the data storage area can store data created according to the use of the mobile phone (such as audio data, a phone book, etc.). In addition, the memory 420 may include a high-speed random access memory, and may also include a non-volatile memory, for example at least one magnetic disk storage device, a flash memory device, or other non-volatile solid-state storage components.
The input unit 430 can be used to receive input numeric or character information, and to generate key signal inputs related to user settings and function control of the mobile phone. Specifically, the input unit 430 may include a touch panel 431 and other input devices 432. The touch panel 431, also called a touch screen, can collect touch operations by the user on or near it (such as operations by the user with a finger, stylus, or any other suitable object or accessory on or near the touch panel 431) and drive the corresponding connected devices according to a preset program. Optionally, the touch panel 431 may include two parts: a touch detection device and a touch controller. The touch detection device detects the touch position of the user, detects the signal brought by the touch operation, and transmits the signal to the touch controller; the touch controller receives the touch information from the touch detection device, converts it into contact coordinates, sends them to the processor 480, and can receive and execute commands sent by the processor 480. Furthermore, the touch panel 431 may be implemented in multiple types such as resistive, capacitive, infrared, and surface acoustic wave. Besides the touch panel 431, the input unit 430 may also include other input devices 432, which may include, but are not limited to, one or more of a physical keyboard, function keys (such as volume control keys and a switch key), a trackball, a mouse, a joystick, and the like.
The display unit 440 may be used to display information input by the user or information provided to the user, as well as the various menus of the mobile phone. The display unit 440 may include a display panel 441; optionally, the display panel 441 may be configured in the form of a liquid crystal display (Liquid Crystal Display, LCD), an organic light-emitting diode (Organic Light-Emitting Diode, OLED), or the like. Further, the touch panel 431 may cover the display panel 441; when the touch panel 431 detects a touch operation on or near it, it transmits the operation to the processor 480 to determine the type of the touch event, and the processor 480 then provides a corresponding visual output on the display panel 441 according to the type of the touch event. Although in Fig. 4 the touch panel 431 and the display panel 441 are two independent components implementing the input and output functions of the mobile phone, in some embodiments the touch panel 431 and the display panel 441 may be integrated to implement the input and output functions of the mobile phone.
The mobile phone may also include at least one sensor 450, such as a light sensor, a motion sensor, and other sensors. Specifically, the light sensor may include an ambient light sensor and a proximity sensor: the ambient light sensor can adjust the brightness of the display panel 441 according to the brightness of the ambient light, and the proximity sensor can turn off the display panel 441 and/or the backlight when the mobile phone is moved to the ear. As one kind of motion sensor, an accelerometer sensor can detect the magnitude of acceleration in all directions (generally three axes), can detect the magnitude and direction of gravity when stationary, and can be used for applications that identify the attitude of the mobile phone (such as landscape/portrait switching, related games, and magnetometer attitude calibration) and for vibration-recognition related functions (such as a pedometer and tapping). Other sensors that may also be configured on the mobile phone, such as a gyroscope, a barometer, a hygrometer, a thermometer, and an infrared sensor, are not described here again.
The audio circuit 460, a speaker 461, and a microphone 462 can provide an audio interface between the user and the mobile phone. The audio circuit 460 can transmit the electrical signal converted from received audio data to the speaker 461, and the speaker 461 converts it into a sound signal for output; on the other hand, the microphone 462 converts a collected sound signal into an electrical signal, which is received by the audio circuit 460 and converted into audio data. After the audio data is output to the processor 480 for processing, it is sent through the RF circuit 410 to, for example, another mobile phone, or the audio data is output to the memory 420 for further processing.
WiFi is a short-range wireless transmission technology. Through the WiFi module 470, the mobile phone can help the user send and receive e-mail, browse web pages, access streaming media, and so on, providing the user with wireless broadband Internet access. Although Fig. 4 shows the WiFi module 470, it can be understood that it is not an essential component of the mobile phone and may be omitted as needed without changing the scope of the essence of the invention.
The processor 480 is the control center of the mobile phone. It connects all parts of the whole mobile phone through various interfaces and lines, and performs the various functions of the mobile phone and processes data by running or executing the software programs and/or modules stored in the memory 420 and calling the data stored in the memory 420, thereby monitoring the mobile phone as a whole. Optionally, the processor 480 may include one or more processing units; preferably, the processor 480 may integrate an application processor and a modem processor, where the application processor mainly handles the operating system, the user interface, application programs, and so on, and the modem processor mainly handles wireless communication. It can be understood that the above modem processor may also not be integrated into the processor 480.
The mobile phone also includes a power supply 490 (such as a battery) that supplies power to all the components. Preferably, the power supply may be logically connected to the processor 480 through a power management system, so that functions such as charging management, discharging management, and power consumption management are implemented through the power management system.
Although not shown, the camera 411 may at least include a first camera and a second camera, where the first image data captured by the first camera can be used for image preview, and the second image data captured by the second camera is not used for image preview.
Further, the mobile phone may also include a Bluetooth module and the like, which is not described here again.
In the embodiment of the present invention, the memory 420 included in the terminal is also used to store: an instruction for obtaining first image data and second image data captured by the first camera and the second camera respectively at the same moment for the same photographed scene; an instruction for obtaining first histogram data and second histogram data corresponding to the first image data and the second image data; an instruction for extracting first feature data from the first histogram data and extracting second feature data from the second histogram data; and an instruction for detecting, based on the first feature data and the second feature data, whether the first camera and/or the second camera is blocked.
The processor 480 included in the terminal also has the following functions:
According to the instruction for obtaining the first image data and the second image data captured by the first camera and the second camera respectively at the same moment for the same photographed scene, obtaining the first image data and the second image data captured by the first camera and the second camera respectively at the same moment for the same photographed scene;
According to the instruction for obtaining the first histogram data and the second histogram data corresponding to the first image data and the second image data, obtaining the first histogram data and the second histogram data corresponding to the first image data and the second image data respectively;
According to the instruction for extracting first feature data from the first histogram data and extracting second feature data from the second histogram data, extracting the first feature data from the first histogram data and extracting the second feature data from the second histogram data;
According to the instruction for detecting, based on the first feature data and the second feature data, whether the first camera and/or the second camera is blocked, detecting, based on the first feature data and the second feature data, whether the first camera and/or the second camera is blocked.
Optionally, the processor 480 included in the terminal also has the following functions:
For the first feature data, obtaining corresponding second feature data;
Comparing the first feature data with the corresponding second feature data to obtain a comparison result;
Detecting, based on the first feature data and/or the second feature data and/or the comparison result, whether the first camera and/or the second camera is blocked.
Optionally, the processor 480 included in the terminal also has the following functions:
Performing a first logical operation on the first feature data and/or the second feature data and/or the comparison result to obtain a first operation result;
If the first operation result is true, determining that at least one of the first camera and the second camera is blocked;
Performing a second logical operation on the first feature data and/or the second feature data and/or the comparison result to obtain a second operation result, where the second logical operation is different from the first logical operation;
If the second operation result is true, determining that the first camera is blocked;
If the second operation result is false, determining whether the first feature data and/or the second feature data exceed a preset threshold;
If so, determining that the second camera is blocked;
If not, determining that both the first camera and the second camera are blocked.
Optionally, the processor 480 included in the terminal also has the following function:
If the first operation result is false, determining that neither the first camera nor the second camera is blocked.
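The decision flow described above can be sketched as follows. This is a minimal illustration: the function name and return strings are hypothetical, and the two logical operations and the threshold test are assumed to have already been evaluated into booleans upstream.

```python
def detect_occlusion(first_result, second_result, feature_exceeds_threshold):
    """Map the three boolean inputs of the decision flow to an outcome.

    first_result:  result of the first logical operation
    second_result: result of the second logical operation
    feature_exceeds_threshold: whether the feature data exceed the preset threshold
    """
    if not first_result:
        # First operation false: neither camera is judged blocked.
        return "neither camera blocked"
    # First operation true: at least one camera is blocked;
    # the second operation narrows down which one.
    if second_result:
        return "first camera blocked"
    if feature_exceeds_threshold:
        return "second camera blocked"
    return "both cameras blocked"
```

Note that the branch order matters: the second logical operation is only consulted once the first has established that some camera is blocked.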
Optionally, the first feature data include at least one or more of the following statistics: a first mean, a first standard deviation, a first feature gray-level count, a first histogram peak, and the gray level at which the first histogram peak is located; the second feature data include at least one or more of the following statistics: a second mean, a second standard deviation, a second feature gray-level count, a second histogram peak, and the gray level at which the second histogram peak is located; where the first feature gray-level count or the second feature gray-level count is the total number of gray levels whose pixel count is not 0.
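A minimal sketch of extracting these five statistics from a grayscale histogram. The function name is hypothetical; the patent does not prescribe an implementation, only the statistics themselves.

```python
def histogram_features(hist):
    """Return (mean, std, nonzero_levels, peak, peak_level) for a histogram.

    hist[g] is the pixel count at gray level g (g = 0 .. len(hist) - 1).
    """
    total = sum(hist)
    # Mean gray level, weighted by pixel counts.
    mean = sum(g * n for g, n in enumerate(hist)) / total
    # Standard deviation of the gray-level distribution.
    var = sum(n * (g - mean) ** 2 for g, n in enumerate(hist)) / total
    std = var ** 0.5
    # Feature gray-level count: number of gray levels with nonzero pixels.
    nonzero_levels = sum(1 for n in hist if n > 0)
    # Histogram peak and the gray level at which it occurs.
    peak = max(hist)
    peak_level = hist.index(peak)
    return mean, std, nonzero_levels, peak, peak_level
```

Running the same function over both cameras' histograms yields the first and second feature data in a directly comparable form.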
Optionally, the processor 480 included in the terminal also has the following functions:
Calculating the difference between the first mean and the second mean to obtain a first difference;
Calculating the difference between the first standard deviation and the second standard deviation to obtain a second difference;
Calculating the difference between the first feature gray-level count and the second feature gray-level count to obtain a third difference;
Calculating the difference between the second feature gray-level count and the first feature gray-level count to obtain a fourth difference;
Calculating the difference between the gray level at which the first histogram peak is located and the gray level at which the second histogram peak is located to obtain a fifth difference;
Calculating the difference between the gray level at which the second histogram peak is located and the gray level at which the first histogram peak is located to obtain a sixth difference;
Comparing the magnitude of the first feature gray-level count with the second feature gray-level count to obtain a first comparison value;
Comparing the magnitude of the gray level at which the first histogram peak is located with the gray level at which the second histogram peak is located to obtain a second comparison value.
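The eight quantities above can be sketched as one helper, assuming each camera's feature data is packed into a (mean, std, gray-level count, peak, peak gray level) tuple. Names are illustrative, and the comparison values are rendered here as simple greater-than booleans since the patent only specifies a magnitude comparison.

```python
def compare_features(f1, f2):
    """Compute the six differences and two comparison values.

    f1, f2: (mean, std, gray-level count, peak, peak gray level) tuples
    for the first and second camera respectively.
    """
    mean1, std1, n1, peak1, lvl1 = f1
    mean2, std2, n2, peak2, lvl2 = f2
    d1 = mean1 - mean2   # first difference: means
    d2 = std1 - std2     # second difference: standard deviations
    d3 = n1 - n2         # third difference: gray-level counts
    d4 = n2 - n1         # fourth difference: reversed order
    d5 = lvl1 - lvl2     # fifth difference: peak gray levels
    d6 = lvl2 - lvl1     # sixth difference: reversed order
    c1 = n1 > n2         # first comparison value
    c2 = lvl1 > lvl2     # second comparison value
    return d1, d2, d3, d4, d5, d6, c1, c2
```

Keeping both subtraction orders (third/fourth, fifth/sixth differences) mirrors the text, which treats each direction as a separate input to the later logical operations.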
Optionally, the processor 480 included in the terminal also has the following function:
Based on the first difference and/or the second difference and/or the third difference and/or the fourth difference and/or the fifth difference and/or the sixth difference and/or the first comparison value and/or the second comparison value, performing a logical operation in combination with the results of comparing the first feature gray-level count and/or the second feature gray-level count and/or the gray level at which the first histogram peak is located and/or the gray level at which the second histogram peak is located with a corresponding plurality of preset thresholds, to obtain the first operation result.
Optionally, the processor 480 included in the terminal also has the following function:
Based on the first comparison value and/or the second comparison value, performing a logical operation in combination with the results of comparing the first feature gray-level count and/or the second feature gray-level count and/or the gray level at which the first histogram peak is located and/or the gray level at which the second histogram peak is located with a corresponding plurality of preset thresholds, to obtain the second operation result.
Optionally, the processor 480 included in the terminal also has the following function:
Normalizing the first histogram data and the second histogram data.
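The patent does not specify the normalization scheme; a common choice, sketched here as an assumption, is to scale bin counts so they sum to 1, which makes the two cameras' histograms comparable even if the sensors differ in resolution.

```python
def normalize_histogram(hist):
    """Scale bin counts so they sum to 1 (empty histograms stay all-zero)."""
    total = sum(hist)
    if total == 0:
        return [0.0] * len(hist)
    return [n / total for n in hist]
```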
Optionally, the processor 480 included in the terminal also has the following functions:
When it is detected that the first camera and/or the second camera is blocked, generating a blocking prompt;
Presenting the blocking prompt in the preview interface.
Those skilled in the art can clearly understand that, for convenience and brevity of description, the specific working processes of the systems, apparatuses, and units described above may refer to the corresponding processes in the foregoing method embodiments, and are not described here again.
In the several embodiments provided in this application, it should be understood that the disclosed system, apparatus, and method may be implemented in other ways. For example, the apparatus embodiments described above are merely illustrative; for instance, the division of the units is merely a logical functional division, and there may be other divisions in actual implementation: multiple units or components may be combined or integrated into another system, or some features may be ignored or not executed. In addition, the mutual couplings, direct couplings, or communication connections shown or discussed may be indirect couplings or communication connections through some interfaces, apparatuses, or units, and may be electrical, mechanical, or in other forms.
The units described as separate components may or may not be physically separate, and the components shown as units may or may not be physical units; that is, they may be located in one place or distributed over multiple network elements. Some or all of the units may be selected according to actual needs to achieve the purpose of the solution of this embodiment.
In addition, the functional units in the embodiments of the present invention may be integrated into one processing unit, or each unit may exist physically alone, or two or more units may be integrated into one unit. The above integrated unit may be implemented in the form of hardware, or in the form of a software functional unit.
Those of ordinary skill in the art can understand that all or part of the steps of the various methods of the above embodiments may be completed by a program instructing relevant hardware. The program may be stored in a computer-readable storage medium, which may include: a read-only memory (ROM, Read-Only Memory), a random access memory (RAM, Random Access Memory), a magnetic disk, an optical disc, or the like.
The method, apparatus, and terminal device for performing occlusion detection on a camera provided by the present invention have been described in detail above. For those of ordinary skill in the art, there will be changes in the specific implementation and application scope according to the idea of the embodiments of the present invention. In summary, the content of this specification should not be construed as limiting the present invention.
The invention discloses A1, a method for performing occlusion detection on a camera, where the camera at least includes a first camera and a second camera, the first image data captured by the first camera can be used for image preview, and the second image data captured by the second camera is not used for image preview;
The method includes:
Obtaining first image data and second image data captured by the first camera and the second camera respectively at the same moment for the same photographed scene;
Obtaining first histogram data and second histogram data corresponding to the first image data and the second image data respectively;
Extracting first feature data from the first histogram data, and extracting second feature data from the second histogram data;
Detecting, based on the first feature data and the second feature data, whether the first camera and/or the second camera is blocked.
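The four steps of A1 can be sketched end to end. This is an illustrative stand-in: the occlusion rule in `looks_blocked` (few distinct gray levels, concentrated at the dark end, as one would expect from a covered lens) is an assumed simplification of the threshold logic, not the patent's exact criterion, and all names are hypothetical.

```python
def gray_histogram(pixels, levels=256):
    """Step 2: build a gray-level histogram from a flat pixel sequence."""
    hist = [0] * levels
    for p in pixels:
        hist[p] += 1
    return hist

def looks_blocked(hist):
    """Steps 3-4 (simplified): extract features and apply an assumed rule.

    A covered lens tends to produce very few distinct gray levels,
    concentrated at the dark end of the histogram.
    """
    total = sum(hist)
    nonzero = sum(1 for n in hist if n > 0)
    mean = sum(g * n for g, n in enumerate(hist)) / total
    return nonzero < 8 and mean < 32

def detect(pixels1, pixels2):
    """Step 1 is assumed done: pixels1/pixels2 are the two cameras'
    simultaneous captures of the same scene. Returns (first_blocked,
    second_blocked)."""
    h1, h2 = gray_histogram(pixels1), gray_histogram(pixels2)
    return looks_blocked(h1), looks_blocked(h2)
```

A fuller implementation would compare the two cameras' features against each other, as in clauses A2 through A8, rather than judging each histogram in isolation.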
A2, the method according to A1, where the step of detecting, based on the first feature data and the second feature data, whether the first camera and/or the second camera is blocked includes:
For the first feature data, obtaining corresponding second feature data;
Comparing the first feature data with the corresponding second feature data to obtain a comparison result;
Detecting, based on the first feature data and/or the second feature data and/or the comparison result, whether the first camera and/or the second camera is blocked.
A3, the method according to A2, where the step of detecting, based on the first feature data and/or the second feature data and/or the comparison result, whether the first camera and/or the second camera is blocked includes:
Performing a first logical operation on the first feature data and/or the second feature data and/or the comparison result to obtain a first operation result;
If the first operation result is true, determining that at least one of the first camera and the second camera is blocked;
Performing a second logical operation on the first feature data and/or the second feature data and/or the comparison result to obtain a second operation result, where the second logical operation is different from the first logical operation;
If the second operation result is true, determining that the first camera is blocked;
If the second operation result is false, determining whether the first feature data and/or the second feature data exceed a preset threshold;
If so, determining that the second camera is blocked;
If not, determining that both the first camera and the second camera are blocked.
A4, the method according to A3, further including:
If the first operation result is false, determining that neither the first camera nor the second camera is blocked.
A5, the method according to A3 or A4, where the first feature data include at least one or more of the following statistics: a first mean, a first standard deviation, a first feature gray-level count, a first histogram peak, and the gray level at which the first histogram peak is located;
The second feature data include at least one or more of the following statistics: a second mean, a second standard deviation, a second feature gray-level count, a second histogram peak, and the gray level at which the second histogram peak is located;
Where the first feature gray-level count or the second feature gray-level count is the total number of gray levels whose pixel count is not 0.
A6, the method according to A5, where the step of comparing the first feature data with the corresponding second feature data to obtain a comparison result includes:
Calculating the difference between the first mean and the second mean to obtain a first difference;
Calculating the difference between the first standard deviation and the second standard deviation to obtain a second difference;
Calculating the difference between the first feature gray-level count and the second feature gray-level count to obtain a third difference;
Calculating the difference between the second feature gray-level count and the first feature gray-level count to obtain a fourth difference;
Calculating the difference between the gray level at which the first histogram peak is located and the gray level at which the second histogram peak is located to obtain a fifth difference;
Calculating the difference between the gray level at which the second histogram peak is located and the gray level at which the first histogram peak is located to obtain a sixth difference;
Comparing the magnitude of the first feature gray-level count with the second feature gray-level count to obtain a first comparison value;
Comparing the magnitude of the gray level at which the first histogram peak is located with the gray level at which the second histogram peak is located to obtain a second comparison value.
A7, the method according to A6, where the step of performing a first logical operation on the first feature data and/or the second feature data and/or the comparison result to obtain a first operation result includes:
Based on the first difference and/or the second difference and/or the third difference and/or the fourth difference and/or the fifth difference and/or the sixth difference and/or the first comparison value and/or the second comparison value, performing a logical operation in combination with the results of comparing the first feature gray-level count and/or the second feature gray-level count and/or the gray level at which the first histogram peak is located and/or the gray level at which the second histogram peak is located with a corresponding plurality of preset thresholds, to obtain the first operation result.
A8, the method according to A6 or A7, where the step of performing a second logical operation on the first feature data and/or the second feature data and/or the comparison result to obtain a second operation result includes:
Based on the first comparison value and/or the second comparison value, performing a logical operation in combination with the results of comparing the first feature gray-level count and/or the second feature gray-level count and/or the gray level at which the first histogram peak is located and/or the gray level at which the second histogram peak is located with a corresponding plurality of preset thresholds, to obtain the second operation result.
A9, the method according to A1, before the step of extracting first feature data from the first histogram data and extracting second feature data from the second histogram data, further including:
Normalizing the first histogram data and the second histogram data.
A10, the method according to A1, further including:
When it is detected that the first camera and/or the second camera is blocked, generating a blocking prompt;
Presenting the blocking prompt in the preview interface.
The invention also discloses B11, an apparatus for performing occlusion detection on a camera, where the camera at least includes a first camera and a second camera, the first image data captured by the first camera can be used for image preview, and the second image data captured by the second camera is not used for image preview;
The apparatus includes:
An image capturing module, adapted to obtain first image data and second image data captured by the first camera and the second camera respectively at the same moment for the same photographed scene;
A histogram data acquisition module, adapted to obtain first histogram data and second histogram data corresponding to the first image data and the second image data respectively;
A feature extraction module, adapted to extract first feature data from the first histogram data and extract second feature data from the second histogram data;
An occlusion detection module, adapted to detect, based on the first feature data and the second feature data, whether the first camera and/or the second camera is blocked.
B12, the apparatus according to B11, where the occlusion detection module is further adapted to:
For the first feature data, obtain corresponding second feature data;
Compare the first feature data with the corresponding second feature data to obtain a comparison result;
Detect, based on the first feature data and/or the second feature data and/or the comparison result, whether the first camera and/or the second camera is blocked.
B13, the apparatus according to B12, where the occlusion detection module is further adapted to:
Perform a first logical operation on the first feature data and/or the second feature data and/or the comparison result to obtain a first operation result;
If the first operation result is true, determine that at least one of the first camera and the second camera is blocked;
Perform a second logical operation on the first feature data and/or the second feature data and/or the comparison result to obtain a second operation result, where the second logical operation is different from the first logical operation;
If the second operation result is true, determine that the first camera is blocked;
If the second operation result is false, determine whether the first feature data and/or the second feature data exceed a preset threshold;
If so, determine that the second camera is blocked;
If not, determine that both the first camera and the second camera are blocked.
B14, the apparatus according to B13, further including:
A blocking determination module, adapted to determine, when the first operation result is false, that neither the first camera nor the second camera is blocked.
B15, the apparatus according to B13 or B14, where the first feature data include at least one or more of the following statistics: a first mean, a first standard deviation, a first feature gray-level count, a first histogram peak, and the gray level at which the first histogram peak is located; the second feature data include at least one or more of the following statistics: a second mean, a second standard deviation, a second feature gray-level count, a second histogram peak, and the gray level at which the second histogram peak is located; where the first feature gray-level count or the second feature gray-level count is the total number of gray levels whose pixel count is not 0.
B16, the apparatus according to B15, where the occlusion detection module is further adapted to:
Calculate the difference between the first mean and the second mean to obtain a first difference;
Calculate the difference between the first standard deviation and the second standard deviation to obtain a second difference;
Calculate the difference between the first feature gray-level count and the second feature gray-level count to obtain a third difference;
Calculate the difference between the second feature gray-level count and the first feature gray-level count to obtain a fourth difference;
Calculate the difference between the gray level at which the first histogram peak is located and the gray level at which the second histogram peak is located to obtain a fifth difference;
Calculate the difference between the gray level at which the second histogram peak is located and the gray level at which the first histogram peak is located to obtain a sixth difference;
Compare the magnitude of the first feature gray-level count with the second feature gray-level count to obtain a first comparison value;
Compare the magnitude of the gray level at which the first histogram peak is located with the gray level at which the second histogram peak is located to obtain a second comparison value.
B17, the apparatus according to B16, where the occlusion detection module is further adapted to:
Based on the first difference and/or the second difference and/or the third difference and/or the fourth difference and/or the fifth difference and/or the sixth difference and/or the first comparison value and/or the second comparison value, perform a logical operation in combination with the results of comparing the first feature gray-level count and/or the second feature gray-level count and/or the gray level at which the first histogram peak is located and/or the gray level at which the second histogram peak is located with a corresponding plurality of preset thresholds, to obtain the first operation result.
B18, the apparatus according to B16 or B17, where the occlusion detection module is further adapted to:
Based on the first comparison value and/or the second comparison value, perform a logical operation in combination with the results of comparing the first feature gray-level count and/or the second feature gray-level count and/or the gray level at which the first histogram peak is located and/or the gray level at which the second histogram peak is located with a corresponding plurality of preset thresholds, to obtain the second operation result.
B19, the apparatus according to B11, further including:
A normalization module, adapted to normalize the first histogram data and the second histogram data.
B20, the apparatus according to B11, further including:
A prompting module, adapted to, when it is detected that the first camera and/or the second camera is blocked, generate a blocking prompt and present the blocking prompt in the preview interface.
The invention also discloses C21, a terminal device, including a camera, a memory, and a processor;
The camera at least includes a first camera and a second camera, where the first image data captured by the first camera can be used for image preview, and the second image data captured by the second camera is not used for image preview;
The memory is used to store: an instruction for obtaining first image data and second image data captured by the first camera and the second camera respectively at the same moment for the same photographed scene; an instruction for obtaining first histogram data and second histogram data corresponding to the first image data and the second image data; an instruction for extracting first feature data from the first histogram data and extracting second feature data from the second histogram data; and an instruction for detecting, based on the first feature data and the second feature data, whether the first camera and/or the second camera is blocked;
The processor is used to:
According to the instruction for obtaining the first image data and the second image data captured by the first camera and the second camera respectively at the same moment for the same photographed scene, obtain the first image data and the second image data captured by the first camera and the second camera respectively at the same moment for the same photographed scene;
According to the instruction for obtaining the first histogram data and the second histogram data corresponding to the first image data and the second image data, obtain the first histogram data and the second histogram data corresponding to the first image data and the second image data respectively;
According to the instruction for extracting first feature data from the first histogram data and extracting second feature data from the second histogram data, extract the first feature data from the first histogram data and extract the second feature data from the second histogram data;
According to the instruction for detecting, based on the first feature data and the second feature data, whether the first camera and/or the second camera is blocked, detect, based on the first feature data and the second feature data, whether the first camera and/or the second camera is blocked.
Claims (10)
1. A method for performing occlusion detection on cameras, the cameras including at least a first camera and a second camera, wherein first image data captured by the first camera can be used for image preview and second image data captured by the second camera cannot be used for image preview;
the method comprising:
obtaining first image data and second image data captured by the first camera and the second camera, respectively, at the same moment for the same shooting scene;
obtaining, respectively, first histogram data and second histogram data corresponding to the first image data and the second image data;
extracting first feature data from the first histogram data, and extracting second feature data from the second histogram data;
detecting, based on the first feature data and the second feature data, whether the first camera and/or the second camera is occluded.
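The claim-1 pipeline (build a gray-level histogram for each of the two synchronously captured frames, then extract feature data from each) can be sketched as follows. This is an illustrative sketch only: the function names, the chosen features, and the toy pixel data are assumptions for explanation, not the patent's implementation.

```python
def gray_histogram(pixels, levels=256):
    """Count how many pixels fall on each gray level (histogram data)."""
    hist = [0] * levels
    for p in pixels:
        hist[p] += 1
    return hist

def extract_features(hist):
    """Derive simple feature data (mean, peak, peak gray level) from a histogram."""
    total = sum(hist)
    mean = sum(level * count for level, count in enumerate(hist)) / total
    peak = max(hist)                 # histogram peak value
    peak_level = hist.index(peak)    # gray level at which the peak occurs
    return {"mean": mean, "peak": peak, "peak_level": peak_level}

# Toy stand-ins for the first and second cameras' frames of the same scene.
first_image = [10, 10, 12, 200, 210]
second_image = [11, 11, 13, 198, 205]
f1 = extract_features(gray_histogram(first_image))
f2 = extract_features(gray_histogram(second_image))
```

The detection step of claim 1 would then compare `f1` against `f2`; a heavily occluded lens tends to produce a darker, more concentrated histogram than its unobstructed counterpart.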
2. The method of claim 1, wherein the step of detecting, based on the first feature data and the second feature data, whether the first camera and/or the second camera is occluded comprises:
for the first feature data, obtaining the corresponding second feature data;
comparing the first feature data with the corresponding second feature data to obtain a comparison result;
detecting, based on the first feature data and/or the second feature data and/or the comparison result, whether the first camera and/or the second camera is occluded.
3. The method of claim 2, wherein the step of detecting, based on the first feature data and/or the second feature data and/or the comparison result, whether the first camera and/or the second camera is occluded comprises:
performing a first logical operation on the first feature data and/or the second feature data and/or the comparison result to obtain a first operation result;
if the first operation result is true, determining that at least one of the first camera and the second camera is occluded;
performing a second logical operation on the first feature data and/or the second feature data and/or the comparison result to obtain a second operation result, wherein the second logical operation differs from the first logical operation;
if the second operation result is true, determining that the first camera is occluded;
if the second operation result is false, determining whether the first feature data and/or the second feature data exceed a preset threshold;
if so, determining that the second camera is occluded;
if not, determining that both the first camera and the second camera are occluded.
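The claim-3 decision flow — a first logical operation gates whether any occlusion exists, a second distinguishes which camera, and a threshold check resolves the remaining cases — can be sketched as below. The concrete predicates behind the two logical operations are not specified here; the boolean inputs and the return strings are assumptions for illustration.

```python
def detect_occlusion(first_result, second_result, feature_value, threshold):
    """Map the two logical-operation results and a feature/threshold
    comparison onto an occlusion verdict, following the claim-3 flow."""
    if not first_result:              # first operation false (claim 4)
        return "neither occluded"
    if second_result:                 # second operation true
        return "first camera occluded"
    if feature_value > threshold:     # feature data exceed preset threshold
        return "second camera occluded"
    return "both cameras occluded"

# Example: occlusion detected overall, second operation false,
# feature value above the threshold.
result = detect_occlusion(first_result=True, second_result=False,
                          feature_value=20, threshold=10)
```

Note that this structure is purely a control-flow sketch; in practice the two operation results would themselves be computed from the feature data and comparison results of claims 5 and 6.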
4. The method of claim 3, further comprising:
if the first operation result is false, determining that neither the first camera nor the second camera is occluded.
5. The method of claim 3 or 4, wherein the first feature data include at least one or more of the following statistics: a first mean, a first standard deviation, a first feature gray-level count, a first histogram peak, and the gray level at which the first histogram peak occurs;
the second feature data include at least one or more of the following statistics: a second mean, a second standard deviation, a second feature gray-level count, a second histogram peak, and the gray level at which the second histogram peak occurs;
wherein the first feature gray-level count or the second feature gray-level count is the total number of gray levels whose pixel count is not 0.
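The claim-5 statistics can all be read directly off a gray-level histogram. A minimal sketch, with the tuple layout and names as illustrative assumptions:

```python
import math

def histogram_statistics(hist):
    """Compute the claim-5 statistics from one gray-level histogram:
    mean, standard deviation, feature gray-level count (number of
    levels with non-zero pixel count), peak, and peak gray level."""
    total = sum(hist)
    mean = sum(l * c for l, c in enumerate(hist)) / total
    var = sum(c * (l - mean) ** 2 for l, c in enumerate(hist)) / total
    std = math.sqrt(var)
    nonzero_levels = sum(1 for c in hist if c != 0)  # feature gray-level count
    peak = max(hist)
    peak_level = hist.index(peak)                    # gray level of the peak
    return mean, std, nonzero_levels, peak, peak_level

# Example: a 256-bin histogram with pixels on only two gray levels,
# as might result from a largely covered lens.
hist = [0] * 256
hist[10], hist[200] = 3, 1
mean, std, nonzero, peak, peak_level = histogram_statistics(hist)
```

A small feature gray-level count is intuitively a strong occlusion cue: an obstructed lens sees a nearly uniform dark field, so its pixels collapse onto few gray levels.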
6. The method of claim 5, wherein the step of comparing the first feature data with the corresponding second feature data to obtain the comparison result comprises:
calculating the difference between the first mean and the second mean to obtain a first difference;
calculating the difference between the first standard deviation and the second standard deviation to obtain a second difference;
calculating the difference between the first feature gray-level count and the second feature gray-level count to obtain a third difference;
calculating the difference between the second feature gray-level count and the first feature gray-level count to obtain a fourth difference;
calculating the difference between the gray level at which the first histogram peak occurs and the gray level at which the second histogram peak occurs to obtain a fifth difference;
calculating the difference between the gray level at which the second histogram peak occurs and the gray level at which the first histogram peak occurs to obtain a sixth difference;
comparing the magnitude of the first feature gray-level count with the second feature gray-level count to obtain a first comparison value;
comparing the magnitude of the gray level at which the first histogram peak occurs with the gray level at which the second histogram peak occurs to obtain a second comparison value.
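The claim-6 comparison step amounts to six signed differences (each feature pair differenced in both directions) plus two magnitude comparisons. A sketch, where the 4-tuple feature layout `(mean, std, gray-level count, peak gray level)` and the key names are assumptions for illustration:

```python
def compare_features(f1, f2):
    """Produce the claim-6 comparison result from two feature tuples."""
    mean1, std1, levels1, peak_level1 = f1
    mean2, std2, levels2, peak_level2 = f2
    return {
        "d1": mean1 - mean2,              # first difference (means)
        "d2": std1 - std2,                # second difference (std devs)
        "d3": levels1 - levels2,          # third difference (level counts)
        "d4": levels2 - levels1,          # fourth difference (reversed)
        "d5": peak_level1 - peak_level2,  # fifth difference (peak levels)
        "d6": peak_level2 - peak_level1,  # sixth difference (reversed)
        "c1": levels1 > levels2,          # first comparison value
        "c2": peak_level1 > peak_level2,  # second comparison value
    }

# Example feature tuples for the first and second cameras.
result = compare_features((100, 5, 30, 120), (90, 7, 25, 110))
```

Carrying both difference directions and the boolean comparisons lets the downstream logical operations of claim 3 test asymmetric conditions (e.g. "first camera much darker than second") without recomputing the features.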
7. The method of claim 1, wherein, before the step of extracting the first feature data from the first histogram data and extracting the second feature data from the second histogram data, the method further comprises:
normalizing the first histogram data and the second histogram data.
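One natural reading of the claim-7 normalization is scaling each histogram so its bins sum to 1, which makes two cameras of different resolutions directly comparable. The patent does not fix the normalization scheme, so this particular choice is an assumption:

```python
def normalize_histogram(hist):
    """Scale histogram bins to sum to 1; pass an all-zero histogram through."""
    total = sum(hist)
    return [c / total for c in hist] if total else hist

normed = normalize_histogram([2, 2, 4])  # toy 3-bin histogram
```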
8. The method of claim 1, further comprising:
when it is detected that the first camera and/or the second camera is occluded, generating an occlusion prompt;
presenting the occlusion prompt in a preview interface.
9. A device for performing occlusion detection on cameras, the cameras including at least a first camera and a second camera, wherein first image data captured by the first camera can be used for image preview and second image data captured by the second camera cannot be used for image preview;
the device comprising:
an image capture module, adapted to obtain first image data and second image data captured by the first camera and the second camera, respectively, at the same moment for the same shooting scene;
a histogram data acquisition module, adapted to obtain, respectively, first histogram data and second histogram data corresponding to the first image data and the second image data;
a feature extraction module, adapted to extract first feature data from the first histogram data and second feature data from the second histogram data;
an occlusion detection module, adapted to detect, based on the first feature data and the second feature data, whether the first camera and/or the second camera is occluded.
10. A terminal device, comprising a camera assembly, a memory and a processor;
wherein the camera assembly includes at least a first camera and a second camera, and wherein first image data captured by the first camera can be used for image preview, while second image data captured by the second camera cannot be used for image preview;
the memory is configured to store: instructions for obtaining first image data and second image data captured by the first camera and the second camera, respectively, at the same moment for the same shooting scene; instructions for obtaining first histogram data and second histogram data corresponding to the first image data and the second image data; instructions for extracting first feature data from the first histogram data and extracting second feature data from the second histogram data; and instructions for detecting, based on the first feature data and the second feature data, whether the first camera and/or the second camera is occluded;
the processor is configured to execute the stored instructions to:
obtain the first image data and the second image data captured by the first camera and the second camera, respectively, at the same moment for the same shooting scene;
obtain, respectively, the first histogram data and the second histogram data corresponding to the first image data and the second image data;
extract the first feature data from the first histogram data, and extract the second feature data from the second histogram data;
detect, based on the first feature data and the second feature data, whether the first camera and/or the second camera is occluded.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201610298833.XA CN105828068A (en) | 2016-05-06 | 2016-05-06 | Method and device for carrying out occlusion detection on camera and terminal device |
Applications Claiming Priority (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201610298833.XA CN105828068A (en) | 2016-05-06 | 2016-05-06 | Method and device for carrying out occlusion detection on camera and terminal device |
Publications (1)
Publication Number | Publication Date |
---|---|
CN105828068A true CN105828068A (en) | 2016-08-03 |
Family
ID=56528428
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN201610298833.XA Pending CN105828068A (en) | 2016-05-06 | 2016-05-06 | Method and device for carrying out occlusion detection on camera and terminal device |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN105828068A (en) |
Citations (14)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN102231223A (en) * | 2011-06-02 | 2011-11-02 | 深圳中兴力维技术有限公司 | Foreign object blocking and alarming method used for camera |
WO2014017317A1 (en) * | 2012-07-27 | 2014-01-30 | 日産自動車株式会社 | Three-dimensional object detection device and foreign object detection device |
CN103685948A (en) * | 2013-12-04 | 2014-03-26 | 乐视致新电子科技(天津)有限公司 | Shooting method and device |
CN104240235A (en) * | 2014-08-26 | 2014-12-24 | 北京君正集成电路股份有限公司 | Method and system for detecting whether camera is covered or not |
CN104539939A (en) * | 2014-12-17 | 2015-04-22 | 惠州Tcl移动通信有限公司 | Lens cleanliness detection method and system based on mobile terminal |
CN104637068A (en) * | 2013-11-14 | 2015-05-20 | 华为技术有限公司 | Detection method and detection device for shielding of video frames and video pictures |
CN104657993A (en) * | 2015-02-12 | 2015-05-27 | 北京格灵深瞳信息技术有限公司 | Lens shielding detection method and device |
CN104699391A (en) * | 2015-04-07 | 2015-06-10 | 联想(北京)有限公司 | Electronic equipment and control method for cameras thereof |
WO2015085034A1 (en) * | 2013-12-06 | 2015-06-11 | Google Inc. | Camera selection based on occlusion of field of view |
CN104811690A (en) * | 2015-04-01 | 2015-07-29 | 广东欧珀移动通信有限公司 | Message prompting method and device |
CN105122794A (en) * | 2013-04-02 | 2015-12-02 | 谷歌公司 | Camera obstruction detection |
CN105163110A (en) * | 2015-09-02 | 2015-12-16 | 厦门美图之家科技有限公司 | Camera cleanliness detection method and system and shooting terminal |
CN105828067A (en) * | 2016-04-19 | 2016-08-03 | 奇酷互联网络科技(深圳)有限公司 | Terminal, method and device for determining whether two cameras are occluded |
CN105915785A (en) * | 2016-04-19 | 2016-08-31 | 奇酷互联网络科技(深圳)有限公司 | Double-camera shadedness determining method and device, and terminal |
Cited By (22)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN106358032A (en) * | 2016-08-22 | 2017-01-25 | 上嘉(天津)文化传播有限公司 | Image data control system based on Internet of Things |
CN106572349A (en) * | 2016-11-18 | 2017-04-19 | 维沃移动通信有限公司 | Camera cleanliness detection method and mobile terminal |
CN106709882A (en) * | 2016-12-16 | 2017-05-24 | 努比亚技术有限公司 | Image fusion method and device |
CN107197148A (en) * | 2017-06-14 | 2017-09-22 | 深圳传音通讯有限公司 | Image pickup method, device and electronic equipment |
CN107483895A (en) * | 2017-09-14 | 2017-12-15 | 移康智能科技(上海)股份有限公司 | The acquisition methods and device of monitoring camera equipment, monitoring image |
CN108280828A (en) * | 2018-01-25 | 2018-07-13 | 上海闻泰电子科技有限公司 | Camera rigging position detection method and device |
CN108391036B (en) * | 2018-03-28 | 2023-07-11 | 东风商用车有限公司 | Vehicle-mounted camera device capable of detecting degradation of sensing function and detection method thereof |
CN108391036A (en) * | 2018-03-28 | 2018-08-10 | 东风商用车有限公司 | Vehicle-mounted camera device capable of detecting degradation of perception function and detection method thereof |
CN109525837A (en) * | 2018-11-26 | 2019-03-26 | 维沃移动通信有限公司 | The generation method and mobile terminal of image |
CN110532876A (en) * | 2019-07-26 | 2019-12-03 | 纵目科技(上海)股份有限公司 | Night mode camera lens pays detection method, system, terminal and the storage medium of object |
CN110544211A (en) * | 2019-07-26 | 2019-12-06 | 纵目科技(上海)股份有限公司 | method, system, terminal and storage medium for detecting lens attachment |
CN110544211B (en) * | 2019-07-26 | 2024-02-09 | 纵目科技(上海)股份有限公司 | Method, system, terminal and storage medium for detecting lens attached object |
CN110971785A (en) * | 2019-11-15 | 2020-04-07 | 北京迈格威科技有限公司 | Camera shielding state detection method and device, terminal and storage medium |
CN110971785B (en) * | 2019-11-15 | 2022-04-29 | 北京迈格威科技有限公司 | Camera shielding state detection method and device, terminal and storage medium |
CN111080571B (en) * | 2019-11-15 | 2023-10-20 | 北京迈格威科技有限公司 | Camera shielding state detection method, device, terminal and storage medium |
CN111080571A (en) * | 2019-11-15 | 2020-04-28 | 北京迈格威科技有限公司 | Camera shielding state detection method and device, terminal and storage medium |
CN111885371A (en) * | 2020-06-01 | 2020-11-03 | 北京迈格威科技有限公司 | Image occlusion detection method and device, electronic equipment and computer readable medium |
CN114079766A (en) * | 2020-08-10 | 2022-02-22 | 珠海格力电器股份有限公司 | Method for prompting shielding of camera under screen, storage medium and terminal equipment |
CN114079766B (en) * | 2020-08-10 | 2023-08-11 | 珠海格力电器股份有限公司 | Under-screen camera shielding prompting method, storage medium and terminal equipment |
CN111970405A (en) * | 2020-08-21 | 2020-11-20 | Oppo(重庆)智能科技有限公司 | Camera shielding detection method, storage medium, electronic device and device |
CN112990309A (en) * | 2021-03-12 | 2021-06-18 | 随锐科技集团股份有限公司 | Method and system for detecting whether foreign matter shielding exists in instrument equipment |
CN112990309B (en) * | 2021-03-12 | 2023-11-28 | 随锐科技集团股份有限公司 | Method and system for detecting whether foreign matter shielding exists in instrument equipment |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
CN105828068A (en) | Method and device for carrying out occlusion detection on camera and terminal device | |
CN103871051B (en) | Image processing method, device and electronic equipment | |
CN103400108B (en) | Face identification method, device and mobile terminal | |
CN108024065B (en) | Terminal shooting method, terminal and computer readable storage medium | |
US11363196B2 (en) | Image selection method and related product | |
CN108605099A (en) | The method and terminal taken pictures for terminal | |
CN104143078A (en) | Living body face recognition method and device and equipment | |
CN107122760A (en) | Fingerprint identification method and related product | |
CN107197146A (en) | Image processing method and related product | |
CN106991034A (en) | A kind of method and apparatus and mobile terminal for monitoring interim card | |
CN107403147A (en) | Living iris detection method and Related product | |
CN110852951B (en) | Image processing method, device, terminal equipment and computer readable storage medium | |
CN110209245A (en) | Face identification method and Related product | |
CN106101529A (en) | A kind of camera control method and mobile terminal | |
CN106371086A (en) | Distance measurement method and device | |
US10636122B2 (en) | Method, device and nonvolatile computer-readable medium for image composition | |
CN113888452A (en) | Image fusion method, electronic device, storage medium, and computer program product | |
CN107864299B (en) | Picture display method and related product | |
US11200437B2 (en) | Method for iris-based living body detection and related products | |
CN109190448A (en) | Face identification method and device | |
CN106648460B (en) | Step counting data filtering method and intelligent terminal | |
CN109561255B (en) | Terminal photographing method and device and storage medium | |
US20190019027A1 (en) | Method and mobile terminal for processing image and storage medium | |
CN110086987B (en) | Camera visual angle cutting method and device and storage medium | |
CN107566654B (en) | Unlocking control method and related product |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
C06 | Publication | ||
PB01 | Publication | ||
C10 | Entry into substantive examination | ||
SE01 | Entry into force of request for substantive examination | ||
RJ01 | Rejection of invention patent application after publication | ||
Application publication date: 20160803 |