CN104933696B - Method and electronic device for determining illumination conditions - Google Patents
Method and electronic device for determining illumination conditions
- Publication number
- CN104933696B (application CN201410108102.5A)
- Authority
- CN
- China
- Prior art keywords
- region
- target area
- brightness
- image
- brightness value
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Active
Landscapes
- Image Processing (AREA)
- Image Analysis (AREA)
Abstract
The invention discloses a method and an electronic device for determining illumination conditions, belonging to the technical field of image processing. The method includes: obtaining a first image that includes a target region; detecting the first image to obtain the target region; performing texture filtering on the target region to obtain a first partial region, the first partial region being the region occupied by texture within the target region; obtaining a second partial region that does not overlap the first partial region and that, combined with the first partial region, forms the target region; and detecting the brightness values of the second partial region and determining the illumination conditions of the target region from those values. Because the second partial region obtained after texture filtering is a smooth region with no pronounced surface relief, judging the illumination conditions of the target region from its brightness values is more accurate.
Description
Technical field
The present invention relates to the technical field of image processing, and in particular to a method and an electronic device for determining illumination conditions.
Background
Variation in illumination tends to make the gray-level distribution of an image containing a target object uneven, producing large local contrast differences that degrade target-object tracking, target-object recognition, image processing of the target object, and 3D (three-dimensional) reconstruction. Determining whether the illumination is uniform when processing an image that contains a target object is therefore a pressing problem in the field of image processing.
When determining whether illumination is uniform, the prior art typically estimates brightness directly over the entire target region of the image and derives an illumination judgment from that estimate. Because the target region of an image is not a plane, shadows cast by surfaces at different heights seriously degrade the accuracy of the judgment, so the prior-art way of determining illumination conditions suffers from low precision.
Summary of the invention
To solve the above problem of the prior art, embodiments of the present invention provide a method and an electronic device for determining illumination conditions. The technical solution is as follows:
In one aspect, a method for determining illumination conditions is provided. The method includes:
obtaining a first image, the first image including a target region;
detecting the first image to obtain the target region;
performing texture filtering on the target region to obtain a first partial region within the target region, the first partial region being the region occupied by texture in the target region;
obtaining a second partial region, the second partial region not overlapping the first partial region, and the second partial region and the first partial region together forming the target region; and
detecting brightness values of the second partial region and determining the illumination conditions of the target region according to the brightness values.
Optionally, detecting the first image to obtain the target region includes:
searching the first image for a target object and determining the position and size of the target object, then determining the target region according to that position and size; or
obtaining an operation performed on the first image and determining the target region according to the area the operation points to.
Optionally, detecting the brightness values of the second partial region and determining the illumination conditions of the target region according to the brightness values includes:
determining the brightness value of each pixel in the second partial region;
computing the average brightness of the second partial region from the per-pixel brightness values; and
determining the illumination conditions of the target region from the per-pixel brightness values and the average brightness.
Optionally, determining the illumination conditions of the target region from the per-pixel brightness values and the average brightness includes:
determining the brightness variance of the second partial region from the per-pixel brightness values and the average brightness; and
determining that the illumination of the target region is uniform when the brightness variance of the second partial region is below a preset threshold.
Optionally, after the brightness variance of the second partial region is determined from the per-pixel brightness values and the average brightness, the method further includes:
determining that the illumination of the target region is non-uniform when the brightness variance of the second partial region exceeds the preset threshold, and performing illumination compensation on the target region.
Optionally, the target region is a face region.
In another aspect, an electronic device is provided. The electronic device includes:
a first acquisition module for obtaining a first image, the first image including a target region;
a detection module for detecting the first image to obtain the target region;
a processing module for performing texture filtering on the target region to obtain a first partial region within the target region, the first partial region being the region occupied by texture in the target region;
a second acquisition module for obtaining a second partial region, the second partial region not overlapping the first partial region, and the second partial region and the first partial region together forming the target region; and
a determining module for detecting brightness values of the second partial region and determining the illumination conditions of the target region according to the brightness values.
Optionally, the detection module searches the first image for a target object, determines the position and size of the target object, and determines the target region according to that position and size; or it obtains an operation performed on the first image and determines the target region according to the area the operation points to.
Optionally, the determining module includes:
a first determining unit for determining the brightness value of each pixel in the second partial region;
a computing unit for computing the average brightness of the second partial region from the per-pixel brightness values; and
a second determining unit for determining the illumination conditions of the target region from the per-pixel brightness values and the average brightness.
Optionally, the second determining unit determines the brightness variance of the second partial region from the per-pixel brightness values and the average brightness, and determines that the illumination of the target region is uniform when the brightness variance of the second partial region is below a preset threshold.
Optionally, the second determining unit is further configured to determine that the illumination of the target region is non-uniform when the brightness variance of the second partial region exceeds the preset threshold, and to perform illumination compensation on the target region.
Optionally, the target region is a face region.
The technical solutions provided by the embodiments of the present invention bring the following beneficial effect:
After the target region in the first image is detected, texture filtering is applied to the target region to obtain the second partial region, and the illumination conditions of the target region are judged from the brightness values of that region. Because the second partial region is a smooth region with no pronounced surface relief, the illumination judgment is more accurate and the judgment precision is improved.
Brief description of the drawings
To describe the technical solutions in the embodiments of the present invention more clearly, the accompanying drawings needed in the description of the embodiments are briefly introduced below. Evidently, the drawings described below illustrate only some embodiments of the present invention; a person of ordinary skill in the art may derive other drawings from them without creative effort.
Fig. 1 is a flowchart of a method for determining illumination conditions according to Embodiment 1 of the present invention;
Fig. 2 is a flowchart of a method for determining illumination conditions according to Embodiment 2 of the present invention;
Fig. 3 is a schematic structural diagram of an electronic device according to Embodiment 3 of the present invention.
Detailed description of the embodiments
To make the objects, technical solutions, and advantages of the present invention clearer, the embodiments of the present invention are described in further detail below with reference to the accompanying drawings.
Fig. 1 is a flowchart of a method for determining illumination conditions according to an embodiment of the present invention. Referring to Fig. 1, the method of this embodiment includes:
101. Obtain a first image, the first image including a target region.
102. Detect the first image to obtain the target region.
103. Perform texture filtering on the target region to obtain a first partial region within the target region, the first partial region being the region occupied by texture in the target region.
104. Obtain a second partial region, the second partial region not overlapping the first partial region, and the second partial region and the first partial region together forming the target region.
105. Detect the brightness values of the second partial region and determine the illumination conditions of the target region according to the brightness values.
In the method provided by this embodiment of the present invention, after the target region in the first image is detected, texture filtering is applied to the target region to obtain the second partial region, and the illumination conditions of the target region are judged from the brightness values of that region. Because the second partial region is a smooth region with no pronounced surface relief, the illumination judgment is more accurate and the judgment precision is improved.
Optionally, detecting the first image to obtain the target region includes: searching the first image for a target object and determining the position and size of the target object, then determining the target region according to that position and size; or obtaining an operation performed on the first image and determining the target region according to the area the operation points to.
Optionally, detecting the brightness values of the second partial region and determining the illumination conditions of the target region according to the brightness values includes: determining the brightness value of each pixel in the second partial region; computing the average brightness of the second partial region from the per-pixel brightness values; and determining the illumination conditions of the target region from the per-pixel brightness values and the average brightness.
Optionally, determining the illumination conditions of the target region from the per-pixel brightness values and the average brightness includes: determining the brightness variance of the second partial region from the per-pixel brightness values and the average brightness; and determining that the illumination of the target region is uniform when the brightness variance is below a preset threshold.
Optionally, after the brightness variance of the second partial region is determined, the method further includes: determining that the illumination of the target region is non-uniform when the brightness variance exceeds the preset threshold, and performing illumination compensation on the target region.
Optionally, the target region is a face region.
All of the optional technical schemes above may be combined in any manner to form alternative embodiments of the present invention, which are not described one by one here.
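Steps 101–105 can be sketched end to end as follows. This is a minimal NumPy illustration, not the patented implementation: the detection step is stubbed with a fixed central rectangle, and a gradient-magnitude mask stands in for the unspecified texture filter — both are assumptions made here for illustration.

```python
import numpy as np

def is_uniformly_lit(image, threshold=2.0):
    """Sketch of steps 101-105: take a target region, split off the
    textured (first) partial region, and judge illumination from the
    brightness variance of the smooth (second) partial region."""
    # Step 102 (stub): take the central half of the image as the target region.
    h, w = image.shape
    region = image[h // 4: 3 * h // 4, w // 4: 3 * w // 4]

    # Step 103 (stand-in texture filter): pixels with a strong local
    # gradient form the textured first partial region.
    gy, gx = np.gradient(region.astype(float))
    texture = np.hypot(gx, gy) > 10.0

    # Step 104: the second partial region is the complement of the first.
    smooth = region[~texture]

    # Step 105: uniform illumination iff the brightness variance of the
    # smooth region is below the preset threshold.
    return smooth.size > 0 and float(np.var(smooth)) < threshold

flat = np.full((32, 32), 128, dtype=np.uint8)
print(is_uniformly_lit(flat))  # a constant image is judged uniformly lit
```

A strongly shaded image (e.g. a left-to-right brightness ramp) would instead exceed the threshold and be judged non-uniform, which is the case step 207(c) of the description handles with illumination compensation.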
Fig. 2 is a flowchart of a method for determining illumination conditions according to an embodiment of the present invention. Taking a face region as the example target region, and referring to Fig. 2, the method of this embodiment includes:
201. Obtain a first image, the first image including a face region.
In this embodiment of the present invention, the gray-level distribution of a captured first image differs with the illumination conditions. When the illumination is uniform, the gray-level distribution of the first image is also uniform; when the illumination is non-uniform, the gray-level distribution is likewise non-uniform. For a first image that includes a face region, non-uniform illumination severely degrades the results of face recognition, face tracking, and similar processing applied to the image. Illumination variation is one of the technical bottlenecks of face recognition and face tracking.
Optionally, the first image may be obtained from a memory card that stores images, or directly from the image sensor of a camera or video camera; this embodiment places no restriction on how the first image is obtained. The face region is the part of the first image that contains a face. The face region may contain one face or several, which this embodiment likewise does not restrict; the illumination conditions are determined in the same way regardless of how many faces the region contains.
202. Detect the first image to obtain the face region.
For a given image, the electronic device does not know in advance whether the image contains a face region or where such a region is located, so the first image must be detected using face-detection technology.
Optionally, the electronic device may detect the first image and obtain the face region in either of the following two ways:
In the first way, a face search is performed in the first image to determine the position and size of the face, and the face region is determined from that position and size.
The first way is automatic detection. The face search in the first image may be implemented with a neural-network algorithm that distills the visual features of faces by analyzing and summarizing a large number of face training samples. Based on these visual face features, the first image is searched for a face. Once a face is found, its position and size are determined from the extent its visual features occupy in the first image, and the face region, which contains at least one complete face, is determined from that position and size.
In the second way, an operation performed on the first image is obtained, and the face region is determined from the area the operation points to.
The second way is manual detection: the face region is specified by the operator of the electronic device. The first image may be shown on the display of the electronic device, and the face region is then determined from the operation the operator performs on it. For example, if the operator draws a circle on the display showing the first image (the electronic device detects a circular trace on the display), the interior of that circle is taken as the face region. Alternatively, the operator may move a pointer of the electronic device, and the face region is determined from the pointer's track; for example, when the detected track is a square, the interior of the square is taken as the face region.
203. Perform texture filtering on the face region to obtain a first partial region within the face region, the first partial region being the region occupied by texture in the face region.
In this embodiment of the present invention, an unprocessed target region contains both the first partial region (the region occupied by texture) and the second partial region (the non-texture region). Because the textured region is not planar, it not only looks rough but, when the illumination of the face region is judged, casts shadows from its surface relief that would distort the judgment. Texture filtering must therefore be applied to the face region first.
Optionally, the texture filtering may use an existing nearest-neighbor interpolation filter, bilinear filter, trilinear filter, or the like; this embodiment places no restriction on the choice.
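Since the description leaves the filter open, one simple way to separate the textured first partial region from the smooth remainder is to flag pixels that deviate strongly from a local blur. The box-blur size and deviation threshold below are illustrative assumptions, not values from the patent.

```python
import numpy as np

def texture_mask(region, blur=3, threshold=8.0):
    """Mark the textured (first) partial region: pixels that deviate
    strongly from a local box blur are treated as texture."""
    img = np.asarray(region, dtype=float)
    pad = blur // 2
    padded = np.pad(img, pad, mode="edge")
    # Box blur via summed shifted windows (no SciPy dependency).
    blurred = np.zeros_like(img)
    for dy in range(blur):
        for dx in range(blur):
            blurred += padded[dy:dy + img.shape[0], dx:dx + img.shape[1]]
    blurred /= blur * blur
    return np.abs(img - blurred) > threshold

smooth = np.full((16, 16), 100.0)
assert not texture_mask(smooth).any()  # a flat region has no texture
```

The second partial region of step 204 is then simply the complement of this mask within the face region.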
204. Obtain a second partial region, the second partial region not overlapping the first partial region, and the second partial region and the first partial region together forming the face region.
In this embodiment of the present invention, after the face region is texture-filtered as in step 203, the second partial region is obtained. The second partial region is the non-texture region and contains no texture features. The first and second partial regions are two distinct, independent regions whose combination yields the face region.
205. Determine the brightness value of each pixel in the second partial region.
In this embodiment of the present invention, the brightness value of each pixel in the second partial region may be determined as follows: extract the color components R, G, and B of each pixel and compute their weighted sum to obtain the pixel's brightness value, where the weights of R, G, and B are each values between 0 and 1. Other ways of determining the pixel brightness values may of course be used; this embodiment places no restriction on the choice.
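The weighted sum can be written directly. The description only requires weights between 0 and 1; the ITU-R BT.601 luma weights used as the default here are a common choice, not one mandated by the patent.

```python
def brightness(r, g, b, weights=(0.299, 0.587, 0.114)):
    """Brightness of one pixel as a weighted sum of its R, G, B
    components (step 205). The BT.601 default weights are an
    illustrative choice; any weights in (0, 1) satisfy the text."""
    wr, wg, wb = weights
    return wr * r + wg * g + wb * b

print(round(brightness(255, 255, 255), 6))  # pure white: 255.0
print(brightness(0, 0, 0))                  # pure black: 0.0
```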
206. Compute the average brightness of the second partial region from the per-pixel brightness values.
Suppose the second partial region contains n pixels whose brightness values are denoted x1, x2, x3, ..., xn, and let the average brightness of the second partial region be denoted x̄. The average brightness is then

x̄ = (x1 + x2 + x3 + ... + xn) / n
207. Determine the illumination conditions of the face region from the per-pixel brightness values and the average brightness.
In this embodiment of the present invention, determining the illumination conditions of the face region from the per-pixel brightness values and the average brightness comprises the following steps 207(a) to 207(c):
207(a). Determine the brightness variance of the second partial region from the per-pixel brightness values and the average brightness.
Continuing the example of step 206, let the brightness variance of the second partial region be denoted S. The brightness variance S of the second partial region is obtained from the per-pixel brightness values and the average brightness by the following formula:

S = ((x1 − x̄)² + (x2 − x̄)² + ... + (xn − x̄)²) / n
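Steps 206 and 207(a) amount to the ordinary population mean and variance, and can be transcribed directly (the function and variable names are illustrative; the patent names no code):

```python
def brightness_stats(values):
    """Average brightness (step 206) and brightness variance
    (step 207(a)) of the second partial region's pixel values."""
    n = len(values)
    mean = sum(values) / n                                 # x-bar
    variance = sum((x - mean) ** 2 for x in values) / n    # S
    return mean, variance

mean, var = brightness_stats([100, 102, 98, 100])
print(mean, var)  # 100.0 2.0
```

With the example preset threshold of 2 from step 207(b), this sample region would sit exactly at the boundary; strictly smaller variance means uniform illumination.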
207(b). When the brightness variance of the second partial region is below the preset threshold, determine that the illumination of the face region is uniform.
The preset threshold may be a value such as 2 or 3; this embodiment places no restriction on its magnitude, which may be chosen as circumstances require or set from empirical values. In probability theory and mathematical statistics, variance measures how far a random variable deviates from its mathematical expectation (its mean); in the present invention, the brightness variance measures how far the per-pixel brightness values of the second partial region deviate from their average. A smaller brightness variance therefore indicates smaller deviations, meaning that the brightness values of the pixels in the second partial region differ little and the illumination tends to be uniform. Accordingly, in this embodiment of the present invention, whether the face region is uniformly illuminated is determined by comparing the brightness variance against the preset threshold: when the brightness variance of the second partial region is below the preset threshold, the illumination of the face region is determined to be uniform. Once the face region is determined to be uniformly illuminated, face recognition, face tracking, and the like can be applied to it directly, and because the illumination is uniform, the recognition or tracking results are also more accurate.
207(c). When the brightness variance of the second partial region exceeds the preset threshold, determine that the illumination of the face region is non-uniform and perform illumination compensation on the face region.
If the brightness variance of the second partial region exceeds the preset threshold, the per-pixel brightness values deviate strongly from the average, meaning the brightness differences between pixels are large, so the illumination of the face region is determined to be non-uniform. Non-uniform illumination can arise, for example, when the first image is shot from the side: because a face is a three-dimensional object, shadows may appear under such lighting, making the illumination of the face region uneven. If face recognition or face tracking were applied directly to a non-uniformly illuminated face region, the results could be badly wrong or unobtainable. To avoid this, illumination compensation must be applied to the face region before face recognition or face tracking. For example, histogram equalization and an exponential transform may each be applied to the face region, and the two processed versions fused by weighted combination, thereby achieving the illumination preprocessing of the face region in preparation for the subsequent face recognition or face tracking.
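The compensation example just described, histogram equalization and an exponential transform fused by weighted combination, can be sketched as follows. The 0.5/0.5 fusion weights and the exponent 0.6 are illustrative choices made here; the patent specifies neither.

```python
import numpy as np

def compensate_illumination(region, weight=0.5, gamma=0.6):
    """Illumination compensation of step 207(c): histogram-equalize
    the region, apply an exponential (power-law) transform, and fuse
    the two results by a weighted sum."""
    img = np.asarray(region, dtype=np.uint8)
    # Histogram equalization via the cumulative distribution function.
    hist = np.bincount(img.ravel(), minlength=256)
    cdf = np.cumsum(hist).astype(float)
    cdf = (cdf - cdf.min()) / (cdf.max() - cdf.min() + 1e-12) * 255.0
    equalized = cdf[img]
    # Exponential (gamma) transform of the original region.
    transformed = 255.0 * (img / 255.0) ** gamma
    # Weighted fusion of the two processed versions.
    fused = weight * equalized + (1.0 - weight) * transformed
    return fused.astype(np.uint8)

dark = np.tile(np.arange(0, 64, dtype=np.uint8), (8, 1))  # underexposed ramp
out = compensate_illumination(dark)
print(out.min(), out.max())  # dynamic range is stretched well past 63
```

The compensated region then feeds into the face recognition or face tracking step in place of the original.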
In the method provided by this embodiment of the present invention, after the target region in the first image is detected, texture filtering is applied to the target region to obtain the second partial region, and the illumination conditions of the target region are judged from the brightness values of that region. Because the second partial region is a smooth region with no pronounced surface relief, the illumination judgment is more accurate and the judgment precision is improved.
Fig. 3 shows an electronic device according to an embodiment of the present invention. Referring to Fig. 3, the electronic device includes a first acquisition module 301, a detection module 302, a processing module 303, a second acquisition module 304, and a determining module 305.
The first acquisition module 301 obtains a first image, the first image including a target region. The detection module 302 is connected to the first acquisition module 301 and detects the first image to obtain the target region. The processing module 303 is connected to the detection module 302 and performs texture filtering on the target region to obtain a first partial region within the target region, the first partial region being the region occupied by texture in the target region. The second acquisition module 304 is connected to the processing module 303 and obtains a second partial region, the second partial region not overlapping the first partial region, and the second partial region and the first partial region together forming the target region. The determining module 305 is connected to the second acquisition module 304, detects the brightness values of the second partial region, and determines the illumination conditions of the target region according to the brightness values.
Optionally, the detection module searches the first image for a target object, determines the position and size of the target object, and determines the target region from that position and size; or it obtains an operation performed on the first image and determines the target region from the area the operation points to.
Optionally, the determining module includes:
a first determining unit for determining the brightness value of each pixel in the second partial region;
a computing unit for computing the average brightness of the second partial region from the per-pixel brightness values; and
a second determining unit for determining the illumination conditions of the target region from the per-pixel brightness values and the average brightness.
Optionally, the second determining unit determines the brightness variance of the second partial region from the per-pixel brightness values and the average brightness, and determines that the illumination of the target region is uniform when the brightness variance of the second partial region is below a preset threshold.
Optionally, the second determining unit is further configured to determine that the illumination of the target region is non-uniform when the brightness variance of the second partial region exceeds the preset threshold, and to perform illumination compensation on the target region.
Optionally, the target region is a face region.
In summary, in the electronic device provided by this embodiment of the present invention, after the target region in the first image is detected, texture filtering is applied to the target region to obtain the second partial region, and the illumination conditions of the target region are judged from the brightness values of that region. Because the second partial region is a smooth region with no pronounced surface relief, the illumination judgment is more accurate and the judgment precision is improved.
It should be noted that the division into the functional modules above is only an example of how the electronic device of the foregoing embodiment determines illumination conditions. In practice, the functions may be assigned to different functional modules as needed; that is, the internal structure of the device may be divided into different functional modules to accomplish all or part of the functions described above. Moreover, the electronic device of the foregoing embodiment and the method embodiments for determining illumination conditions belong to the same conception; for its specific implementation, refer to the method embodiments, which are not repeated here.
The sequence numbers of the foregoing embodiments are for description only and do not indicate the relative merit of the embodiments.
A person of ordinary skill in the art will appreciate that all or part of the steps of the foregoing embodiments may be implemented in hardware, or by a program instructing the relevant hardware, the program being stored in a computer-readable storage medium such as a read-only memory, a magnetic disk, or an optical disc.
The foregoing is merely preferred embodiments of the present invention and is not intended to limit the invention. Any modification, equivalent replacement, improvement, and the like made within the spirit and principles of the present invention shall fall within the protection scope of the present invention.
Claims (12)
- A kind of 1. method for determining light conditions, applied to electronic equipment, it is characterised in that methods described includes:The first image is obtained, described first image includes target area;Described first image is detected, obtains the target area;Texture filtering processing is carried out to the target area, obtains the Part I region in the target area, described first Subregion is the region residing for the texture in the target area;Part II region is obtained, the Part II region and the Part I region be not overlapping, the Part II area Domain can be combined as the target area with the Part I region;The brightness value in the Part II region is detected, the light conditions of the target area are determined according to the brightness value.
- 2. according to the method for claim 1, it is characterised in that it is described that described first image is detected, obtain target Region, including:Target object search is carried out in described first image, determines position and the size of the target object;According to the mesh Position and the size of object are marked, determines the target area;Or,Obtain the operation for described first image;According to the region of the sensing of the operation, the target area is determined.
- 3. according to the method for claim 1, it is characterised in that the brightness value in the detection Part II region, root The light conditions of the target area are determined according to the brightness value, including:Determine the brightness value of each pixel in the Part II region;According to the brightness value of each pixel, the average brightness in the Part II region is calculated;According to the brightness value of each pixel and the average brightness, the light conditions of the target area are determined.
- 4. according to the method for claim 3, it is characterised in that described according to each brightness value of pixel and described Average brightness, the light conditions of the target area are determined, including:According to the brightness value of each pixel and the average brightness, the brightness variance in the Part II region is determined Value;When the brightness variance yields in the Part II region is less than predetermined threshold value, the target area uniform illumination is determined.
- 5. The method according to claim 4, wherein after determining the brightness variance of the second part region according to the brightness value of each pixel and the average brightness, the method further comprises: when the brightness variance of the second part region is greater than the preset threshold, determining that the target region is non-uniformly lit, and performing illumination compensation on the target region.
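Claims 3–5 reduce to a mean/variance test over the smooth region's brightness values, followed by compensation when the variance exceeds the threshold. A minimal sketch, assuming an 8-bit grayscale image; the variance threshold is illustrative, and gamma correction is a stand-in for the compensation method, which the patent does not fix:

```python
import numpy as np

def lighting_is_uniform(target, smooth_mask, var_thresh=400.0):
    """Per claims 3-4: take the brightness value of each pixel in the
    second part (smooth) region, compute the average brightness, then
    the variance about that average; lighting is deemed uniform when
    the variance falls below the preset threshold."""
    vals = target[smooth_mask].astype(np.float64)
    mean = vals.mean()
    variance = np.mean((vals - mean) ** 2)
    return variance < var_thresh

def compensate_illumination(target, gamma=0.6):
    """Per claim 5, non-uniform lighting triggers compensation; gamma
    correction is used here purely as an illustrative substitute."""
    norm = np.clip(target.astype(np.float64) / 255.0, 0.0, 1.0)
    return (np.power(norm, gamma) * 255.0).astype(np.uint8)
```

Restricting the statistics to the smooth mask is the point of the method: shadows cast by texture (eyebrows, nostrils in a face region) would otherwise inflate the variance and mimic uneven lighting.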
- 6. The method according to any one of claims 1 to 5, wherein the target region is a face region.
- 7. An electronic device, wherein the electronic device comprises: a first acquisition module configured to acquire a first image, the first image comprising a target region; a detection module configured to detect the first image to obtain the target region; a processing module configured to perform texture filtering on the target region to obtain a first part region in the target region, the first part region being the region occupied by texture within the target region; a second acquisition module configured to obtain a second part region, the second part region not overlapping the first part region, and the second part region and the first part region together constituting the target region; and a determining module configured to detect brightness values of the second part region and determine the light conditions of the target region according to the brightness values.
- 8. The electronic device according to claim 7, wherein the detection module is configured to search the first image for a target object, determine a position and a size of the target object, and determine the target region according to the position and the size of the target object; or to acquire an operation directed at the first image and determine the target region according to the region the operation points to.
- 9. The electronic device according to claim 7, wherein the determining module comprises: a first determining unit configured to determine a brightness value of each pixel in the second part region; a computing unit configured to calculate an average brightness of the second part region according to the brightness values of the pixels; and a second determining unit configured to determine the light conditions of the target region according to the brightness value of each pixel and the average brightness.
- 10. The electronic device according to claim 9, wherein the second determining unit is configured to determine a brightness variance of the second part region according to the brightness value of each pixel and the average brightness, and, when the brightness variance of the second part region is less than a preset threshold, to determine that the target region is uniformly lit.
- 11. The electronic device according to claim 10, wherein the second determining unit is further configured to, when the brightness variance of the second part region is greater than the preset threshold, determine that the target region is non-uniformly lit and perform illumination compensation on the target region.
- 12. The electronic device according to any one of claims 7 to 11, wherein the target region is a face region.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201410108102.5A CN104933696B (en) | 2014-03-21 | 2014-03-21 | Determine the method and electronic equipment of light conditions |
Publications (2)
Publication Number | Publication Date |
---|---|
CN104933696A CN104933696A (en) | 2015-09-23 |
CN104933696B true CN104933696B (en) | 2017-12-29 |
Family
ID=54120851
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN201410108102.5A Active CN104933696B (en) | 2014-03-21 | 2014-03-21 | Determine the method and electronic equipment of light conditions |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN104933696B (en) |
Families Citing this family (1)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN109784285A (en) * | 2019-01-21 | 2019-05-21 | 深圳市云眸科技有限公司 | Realize method and device, the electronic equipment, storage medium of recognition of face |
Citations (4)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN101620667A (en) * | 2008-07-03 | 2010-01-06 | 深圳市康贝尔智能技术有限公司 | Processing method for eliminating illumination unevenness of face image |
CN102333233A (en) * | 2011-09-23 | 2012-01-25 | 宁波大学 | Stereo image quality objective evaluation method based on visual perception |
CN102509077A (en) * | 2011-10-28 | 2012-06-20 | 江苏物联网研究发展中心 | Target identification method based on automatic illumination evaluation |
US8477234B2 (en) * | 2009-07-23 | 2013-07-02 | Panasonic Electric Works Co., Ltd. | Brightness sensing system and illumination system using the same |
- 2014-03-21: CN application CN201410108102.5A filed, granted as CN104933696B; legal status: Active
Non-Patent Citations (1)
Title |
---|
Research on subspace face recognition methods for face images under uneven illumination; Liu Luyan et al.; Video Engineering (《电视技术》); 2014-08-15; Vol. 38, No. 15; pp. 217-221 * |
Also Published As
Publication number | Publication date |
---|---|
CN104933696A (en) | 2015-09-23 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
CN108898047B (en) | Pedestrian detection method and system based on occlusion perception | |
CN104408460B (en) | Lane detection and tracking method | |
CN105279372B (en) | Method and apparatus for determining building depth | |
CN107798685B (en) | Pedestrian height determination method, apparatus and system | |
CN109376667A (en) | Object detection method, device and electronic equipment | |
CN109670452A (en) | Face detection method, device, electronic equipment and face detection model | |
CN109472822A (en) | Object dimension measurement method based on depth image processing | |
CN104173054B (en) | Human body height measuring method and device based on binocular vision | |
CN105812746B (en) | Object detection method and system | |
CN109284674A (en) | Method and device for determining lane lines | |
CN105279772B (en) | Trackability discrimination method for infrared sequence images | |
CN108416771A (en) | Metal material corrosion area detection method based on a monocular camera | |
CN108090896B (en) | Wood board flatness detection and machine learning method, device and electronic equipment | |
CN104463869B (en) | Composite video flame image identification method | |
CN109670503A (en) | Label detection method, apparatus and electronic system | |
CN107220603A (en) | Vehicle detection method and device based on deep learning | |
CN104574393A (en) | Three-dimensional pavement crack image generation system and method | |
CN106404682A (en) | Soil color recognition method | |
CN105139384B (en) | Method and apparatus for defective capsule detection | |
CN106530271A (en) | Infrared image saliency detection method | |
CN105469427B (en) | Target tracking method for video | |
CN110349216A (en) | Container position detection method and device | |
CN106570888A (en) | Target tracking method based on FAST (Features from Accelerated Segment Test) corners and pyramid KLT (Kanade-Lucas-Tomasi) | |
CN107764233A (en) | Measuring method and device | |
CN104574312A (en) | Method and device for calculating the circle center of a target image | |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
C06 | Publication | ||
PB01 | Publication | ||
C10 | Entry into substantive examination | ||
SE01 | Entry into force of request for substantive examination | ||
GR01 | Patent grant | ||
GR01 | Patent grant |