CN110111281A - Image processing method and device, electronic equipment and storage medium - Google Patents
Image processing method and device, electronic equipment and storage medium
- Publication number
- CN110111281A (Application number CN201910381154.2A)
- Authority
- CN
- China
- Prior art keywords
- image
- processed
- segmentation result
- luminous environment
- target image
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Pending
Classifications
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T5/00—Image enhancement or restoration
- G06T5/90—Dynamic range modification of images or parts thereof
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T11/00—2D [Two Dimensional] image generation
- G06T11/40—Filling a planar surface by adding surface attributes, e.g. colour or texture
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T11/00—2D [Two Dimensional] image generation
- G06T11/60—Editing figures and text; Combining figures or text
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T5/00—Image enhancement or restoration
- G06T5/77—Retouching; Inpainting; Scratch removal
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V40/00—Recognition of biometric, human-related or animal-related patterns in image or video data
- G06V40/10—Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
- G06V40/16—Human faces, e.g. facial parts, sketches or expressions
- G06V40/161—Detection; Localisation; Normalisation
Abstract
This disclosure relates to an image processing method and device, electronic equipment, and a storage medium. The method includes: obtaining an image to be processed; performing target extraction on the image to be processed to obtain a target image; and performing pixel statistics on the target image and, according to the pixel statistics result, determining the lighting environment in which the image to be processed was captured. Embodiments of the present disclosure can determine the lighting environment of the image to be processed relatively simply and with a small amount of computation, thereby improving the speed and precision of image processing.
Description
Technical field
This disclosure relates to the technical field of image processing, and in particular to an image processing method and device, electronic equipment, and a storage medium.
Background technique
In the field of image technology, determining the lighting environment in which an image was captured is the basis for image analysis and processing. Analyzing the lighting environment of an image is therefore vital for many image processing procedures.
Summary of the invention
The present disclosure proposes an image processing technical solution.
According to one aspect of the disclosure, an image processing method is provided, comprising: obtaining an image to be processed; performing target extraction on the image to be processed to obtain a target image; and performing pixel statistics on the target image and, according to the pixel statistics result, determining the lighting environment of the image to be processed.
Through the embodiment disclosed above, the lighting environment of the image to be processed can be determined relatively simply and with a small amount of computation, thereby improving the speed and precision of image processing.
In one possible implementation, performing target extraction on the image to be processed to obtain a target image includes: performing segmentation processing on the image to be processed to obtain a segmentation result, and using the segmentation result as the target image.
Through the embodiment disclosed above, the target image region can be effectively extracted from the image to be processed, greatly reducing the amount of valid data to be processed when subsequently determining the lighting environment of the image to be processed, thereby improving the speed and efficiency of the entire image processing procedure.
In one possible implementation, performing target extraction on the image to be processed to obtain a target image further includes: performing size adjustment on the segmentation result, and using the size-adjusted segmentation result as the target image.
By performing size adjustment on the segmentation result, regions unrelated to the target object can be further removed from the segmentation result, further reducing the amount of data to be computed in subsequent image processing and further improving image processing speed and efficiency.
In one possible implementation, performing size adjustment on the segmentation result comprises: cropping the segmentation result, and/or reducing the resolution of the segmentation result.
Size adjustment methods such as cropping the segmentation result and reducing its resolution can further reduce the number of pixels contained in the target image, thereby further reducing the amount of computation required in subsequent image processing and further increasing image processing speed and efficiency.
In one possible implementation, performing pixel statistics on the target image and determining, according to the pixel statistics result, the lighting environment of the image to be processed comprises: counting the proportion of dark pixels in the target image, wherein a dark pixel contains at least one color channel whose channel value is less than a channel threshold; and determining, according to the proportion of dark pixels, the lighting environment of the image to be processed.
The embodiment disclosed above can conveniently determine the lighting environment of the image to be processed; while guaranteeing the accuracy of the result, it can also increase the speed of the determination, thereby improving image processing efficiency while maintaining image processing precision.
In one possible implementation, determining the lighting environment of the image to be processed according to the proportion of dark pixels comprises: when the proportion of dark pixels is lower than a first proportion threshold, determining that the lighting environment of the image to be processed is a backlight environment; and when the proportion of dark pixels is higher than a second proportion threshold, determining that the lighting environment of the image to be processed is a dim-light environment.
Through the first proportion threshold and the second proportion threshold, the lighting environment of the image to be processed can be divided into three cases, namely a backlight environment, a dim-light environment, and an ordinary-light environment, so that the lighting environment is classified by a quantitative standard, which improves the objectivity and accuracy of the classification result.
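The three-way classification described above can be sketched as follows. The concrete threshold values `t1` and `t2` are illustrative assumptions, since the description leaves the first and second proportion thresholds unspecified:

```python
def classify_light_environment(dark_ratio, t1=0.3, t2=0.7):
    """Map the dark-pixel ratio to one of three lighting environments.

    t1 and t2 stand in for the first and second proportion thresholds;
    their values here are assumptions, not taken from the patent.
    """
    if dark_ratio < t1:
        return "backlight"   # below the first proportion threshold
    if dark_ratio > t2:
        return "dim"         # "dim-light environment" in the claims
    return "normal"          # ordinary-light environment otherwise
```

Because both thresholds are plain numbers, the classification is a quantitative standard as the text notes: two images with the same dark-pixel ratio always receive the same label.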
In one possible implementation, the method further includes: performing, on the image to be processed, processing adapted to the lighting environment.
By applying processing adapted to the lighting environment once it has been determined, the influence of the lighting environment on parameters such as the quality of the image to be processed can be correspondingly reduced, effectively improving the quality and effect of the image to be processed.
In one possible implementation, performing processing adapted to the lighting environment on the image to be processed comprises: when the lighting environment of the image to be processed is a backlight environment or a dim-light environment, changing the brightness of the image to be processed.
Changing the brightness of the image to be processed when its lighting environment is a backlight or dim-light environment can effectively improve the quality of the image, thereby improving the quality and precision of image processing.
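One way the brightness change could be realized is a linear gain-and-bias adjustment; the description does not specify how the brightness is changed, so the `gain` and `bias` values below are purely illustrative:

```python
import numpy as np

def brighten(image, gain=1.5, bias=20):
    """Linear brightness adjustment for a backlit or dim image.

    gain and bias are assumed example values; the patent only states
    that the brightness of the image to be processed is changed.
    """
    # Compute in float to avoid uint8 overflow, then clip back to [0, 255].
    out = image.astype(np.float32) * gain + bias
    return np.clip(out, 0, 255).astype(np.uint8)
```

The clipping step matters: without it, pixels pushed above 255 would wrap around when cast back to `uint8`.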
In one possible implementation, the image to be processed is an image containing a face region, and the target image is a face image.
By taking an image containing a face region as the image to be processed and a face image as the target image, the quality of face image processing can be effectively improved, enhancing the practicability of the image processing method.
According to one aspect of the disclosure, an image processing apparatus is provided, comprising: an obtaining module for obtaining an image to be processed; a target extraction module for performing target extraction on the image to be processed to obtain a target image; and a first processing module for performing pixel statistics on the target image and determining, according to the pixel statistics result, the lighting environment of the image to be processed.
In one possible implementation, the target extraction module is configured to: perform segmentation processing on the image to be processed to obtain a segmentation result, and use the segmentation result as the target image.
In one possible implementation, the target extraction module is further configured to: perform size adjustment on the segmentation result, and use the size-adjusted segmentation result as the target image.
In one possible implementation, the target extraction module is further configured to: crop the segmentation result, and/or reduce the resolution of the segmentation result.
In one possible implementation, the first processing module is configured to: count the proportion of dark pixels in the target image, wherein a dark pixel contains at least one color channel whose channel value is less than a channel threshold; and determine, according to the proportion of dark pixels, the lighting environment of the image to be processed.
In one possible implementation, the first processing module is further configured to: when the proportion of dark pixels is lower than a first proportion threshold, determine that the lighting environment of the image to be processed is a backlight environment; and when the proportion of dark pixels is higher than a second proportion threshold, determine that the lighting environment of the image to be processed is a dim-light environment.
In one possible implementation, the apparatus further includes a second processing module configured to: perform, on the image to be processed, processing adapted to the lighting environment.
In one possible implementation, the second processing module is further configured to: when the lighting environment of the image to be processed is a backlight environment or a dim-light environment, change the brightness of the image to be processed.
In one possible implementation, the image to be processed is an image containing a face region, and the target image is a face image.
According to one aspect of the disclosure, electronic equipment is provided, comprising:
a processor;
a memory for storing processor-executable instructions;
wherein the processor is configured to execute the above image processing method.
According to one aspect of the disclosure, a computer-readable storage medium is provided, on which computer program instructions are stored, the computer program instructions implementing the above image processing method when executed by a processor.
In the embodiments of the present disclosure, target extraction is performed on the image to be processed to obtain a target image; pixel statistics are then performed on the target image, and according to the pixel statistics result the lighting environment of the target image, and hence of the image to be processed, can be determined. Through this process, the lighting environment of the image to be processed can be determined relatively simply and with a small amount of computation, thereby improving the speed and precision of image processing.
It should be understood that the above general description and the following detailed description are only exemplary and explanatory, and do not limit the disclosure. Other features and aspects of the disclosure will become apparent from the following detailed description of exemplary embodiments with reference to the accompanying drawings.
Detailed description of the drawings
The drawings herein are incorporated into and form part of this specification; they show embodiments consistent with the present disclosure and, together with the specification, serve to illustrate the technical solutions of the disclosure.
Fig. 1 shows a flow chart of an image processing method according to an embodiment of the disclosure.
Fig. 2 shows a flow chart of an image processing method according to an embodiment of the disclosure.
Fig. 3 shows a schematic diagram of an application example according to the disclosure.
Fig. 4 shows a schematic diagram of an application example according to the disclosure.
Fig. 5 shows a block diagram of an image processing apparatus according to an embodiment of the disclosure.
Fig. 6 shows a block diagram of electronic equipment according to an embodiment of the disclosure.
Fig. 7 shows a block diagram of electronic equipment according to an embodiment of the disclosure.
Specific embodiments
Various exemplary embodiments, features, and aspects of the disclosure are described in detail below with reference to the accompanying drawings. Identical reference numerals in the drawings indicate elements with identical or similar functions. Although various aspects of the embodiments are shown in the drawings, the drawings are not necessarily drawn to scale unless specifically indicated.
The word "exemplary" here means "serving as an example, embodiment, or illustration". Any embodiment described here as "exemplary" should not be construed as preferred or advantageous over other embodiments.
The term "and/or" herein merely describes an association relationship between associated objects, indicating that three relationships may exist; for example, A and/or B may indicate: A alone, both A and B, or B alone. In addition, the term "at least one" herein indicates any one of multiple items or any combination of at least two of them; for example, "at least one of A, B, and C" may indicate any one or more elements selected from the set consisting of A, B, and C.
In addition, numerous specific details are given in the following specific embodiments in order to better describe the disclosure. Those skilled in the art will appreciate that the disclosure can equally be implemented without certain of these details. In some instances, methods, means, elements, and circuits well known to those skilled in the art are not described in detail, in order to highlight the gist of the disclosure.
Fig. 1 shows a flow chart of an image processing method according to an embodiment of the disclosure. The method can be applied to an image processing apparatus, which may be a terminal device, a server, or another processing device. The terminal device may be user equipment (UE), a mobile device, a user terminal, a terminal, a cellular phone, a cordless phone, a personal digital assistant (PDA), a handheld device, a computing device, a vehicle-mounted device, a wearable device, etc.
In some possible implementations, the image processing method may be implemented by a processor invoking computer-readable instructions stored in a memory.
As shown in Fig. 1, the image processing method may include:
Step S11: obtain an image to be processed.
Step S12: perform target extraction on the image to be processed to obtain a target image.
Step S13: perform pixel statistics on the target image and, according to the pixel statistics result, determine the lighting environment of the image to be processed.
The image processing method of the embodiment of the present disclosure performs target extraction on the obtained image to be processed to obtain a target image, then performs pixel statistics based on this target image, and determines the lighting environment of the image to be processed according to the pixel statistics result. Through this process, the lighting environment of the entire image to be processed can be determined by processing only the target image, which greatly reduces the amount of data to be handled during lighting-environment determination and improves the efficiency of the determination while guaranteeing the result. At the same time, determining the lighting environment by pixel statistics on the target image is convenient, and the statistical result objectively reflects the lighting environment of the image, so the efficiency and precision of the image processing procedure can be further improved.
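Steps S11 to S13 can be sketched end to end as follows. The bounding-box form of target extraction, the channel threshold, and the proportion thresholds are example values under stated assumptions: `face_box` is assumed to come from a face detector that is not shown here, and the thresholds `t1` and `t2` are illustrative, not taken from the patent:

```python
import numpy as np

def determine_light_environment(image, face_box, channel_threshold=100,
                                t1=0.3, t2=0.7):
    """Sketch of steps S12-S13: extract the target region, compute the
    dark-pixel ratio, and map it to a lighting environment.

    face_box is an assumed (x, y, width, height) tuple from a detector.
    """
    x, y, w, h = face_box
    target = image[y:y + h, x:x + w]         # step S12: target extraction
    # Step S13: a pixel is dark if any colour channel is below the threshold.
    dark = (target < channel_threshold).any(axis=-1)
    ratio = dark.mean()
    if ratio < t1:
        return "backlight"
    if ratio > t2:
        return "dim"
    return "normal"
```

Note that the statistics run only over the extracted target region, which is what keeps the amount of data, and hence the computation, small.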
The image processing method of the embodiment of the present disclosure can be applied to the processing of images containing a face region; for example, in an image containing a face region, the target region may be the face, or a particular part of the face such as the lips, the eyes, or the bridge of the nose. In one possible implementation, the image to be processed may be an image containing a face region and the target image may be a face image; in one example, the image processing method of the embodiment of the present disclosure can be applied in a beautification process for face images, for instance the beautification of the lips in a face image.
It should be noted that the image processing method of the embodiment of the present disclosure is not limited to the processing of images containing a face region and can be applied to arbitrary image processing; the disclosure is not limited in this respect.
In one possible implementation, the number of images to be processed is also not limited: it may be a single picture or multiple pictures, and one or more target objects may be identified from the multiple pictures.
The implementation of step S11 is not limited; any way of obtaining the image to be processed can serve as the implementation of step S11 and is not restricted by the embodiments disclosed below. In one possible implementation, the image to be processed may be obtained by reading or receiving it; in another possible implementation, it may be obtained by active shooting or another form of active acquisition.
After the image to be processed is obtained through step S11, target extraction may be performed on it through step S12 to obtain the target image. The implementation of step S12 is not limited. In one possible implementation, step S12 may include:
performing segmentation processing on the image to be processed to obtain a segmentation result, and using the segmentation result as the target image. The implementation of the segmentation processing is not limited; any way of segmenting out the region where the target is located from the image to be processed can serve as the implementation of the segmentation processing. In one possible implementation, the segmentation processing may identify and extract the target region from the image to be processed through a neural network; in one example, when the image to be processed is an image containing a face region and the target image is a face image, the segmentation processing may pass the image to be processed through a convolutional neural network for face recognition and use the output result as the target image. In another possible implementation, the segmentation processing may perform target detection on the image to be processed, obtain the specific coordinates of the target image region in the image to be processed according to the detection result, and then crop the image to be processed based on these coordinates to obtain the target image; in one example, when the image to be processed is an image containing a face region and the target image is a face image, the segmentation processing may perform face detection on the image to be processed to obtain the specific coordinates of the face region, and then crop the image to be processed based on these coordinates to obtain the face image as the target image.
By performing segmentation processing on the image to be processed to obtain a segmentation result and using the segmentation result as the target image, the target image region can be effectively extracted from the image to be processed, greatly reducing the amount of valid data to be processed when subsequently determining the lighting environment, thereby improving the speed and efficiency of the whole image processing procedure.
From the above process it can be seen that, in one possible implementation, the segmentation result obtained directly by segmenting the image to be processed can be used as the target image; in another possible implementation, the segmentation result can also be further processed to obtain the final target image. Therefore, in one possible implementation, step S12 may further include:
performing size adjustment on the segmentation result, and using the size-adjusted segmentation result as the target image.
By performing size adjustment on the segmentation result, regions unrelated to the target object can be further removed from the segmentation result, further reducing the amount of data to be computed in subsequent image processing and further improving image processing speed and efficiency.
The specific implementation of the size adjustment of the segmentation result is likewise unrestricted. In one possible implementation, performing size adjustment on the segmentation result may include:
cropping the segmentation result, and/or reducing the resolution of the segmentation result.
From the above it can be seen that, in one possible implementation, the size adjustment of the segmentation result can be realized by cropping alone, by resolution reduction alone, or by both cropping and resolution reduction together. When cropping and resolution reduction are used together, their order is not limited: the segmentation result may first be cropped and then have its resolution reduced, or its resolution may first be reduced and the result then cropped. Besides these two ways, the size adjustment of the segmentation result can also be carried out in other ways, selected according to the actual situation; no limitation is imposed here.
In the above process, the practical implementation of cropping the segmentation result is also not limited. In one possible implementation, the size parameters of the target image may be preset, and the segmentation result is then cropped according to these preset parameters to obtain a cropping result. The preset size parameter values of the target image are not limited and can be chosen flexibly according to the actual situation; in one example, the preset size parameters may be a width of 90 pixels and a height of 160 pixels. In another possible implementation, further object recognition may be performed on the segmentation result; according to the further target recognition result, the position coordinates of the background regions that do not contain the target object can be determined, and based on these background-region coordinates the segmentation result can be further cropped to obtain a cropping result.
Similarly, the way of reducing the resolution of the segmentation result is not limited either: any way of reducing it can serve as the implementation and can be chosen flexibly according to the actual situation. In one possible implementation, the resolution of the segmentation result may be reduced by down-sampling it to obtain a low-resolution image as the result. The degree to which the resolution is reduced is likewise not limited; it can be set according to the actual situation, and no numerical limitation is imposed here.
Size adjustment methods such as cropping the segmentation result and reducing its resolution can further reduce the number of pixels contained in the target image, thereby further reducing the amount of computation required in subsequent image processing and further increasing image processing speed and efficiency.
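A minimal sketch of the two size-adjustment operations, assuming the 90x160 preset from the example above and a naive stride-based subsampling for the resolution reduction (the description does not prescribe any particular down-sampling method, so the stride of 2 is an assumption):

```python
import numpy as np

def resize_segmentation(seg, out_w=90, out_h=160, step=2):
    """Crop a segmentation result to a preset size, then reduce its
    resolution by keeping every `step`-th pixel.

    out_w/out_h follow the 90x160-pixel example in the description;
    the centre crop and the subsampling stride are assumptions.
    """
    h, w = seg.shape[:2]
    # Centre-crop to the preset width/height, clamped to the input size.
    top = max((h - out_h) // 2, 0)
    left = max((w - out_w) // 2, 0)
    cropped = seg[top:top + min(out_h, h), left:left + min(out_w, w)]
    # Reduce resolution by subsampling rows and columns.
    return cropped[::step, ::step]
```

Either operation can also be used alone, matching the "and/or" in the claim: passing `step=1` disables the resolution reduction, while passing `out_w`/`out_h` no smaller than the input disables the crop.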
After having obtained target image, pixels statistics can be carried out to target image by step S13, be then based on this picture
The result of element statistics determines luminous environment locating for image to be processed.The specific implementation of step S13 is equally not limited, any
Step can be can be used as by way of determining luminous environment locating for image to be processed to target image progress pixels statistics
The implementation of S13.Fig. 2 shows the flow charts according to the image processing method of one embodiment of the disclosure, as shown, in one kind
In possible implementation, step S13 may include:
Step S131 counts the ratio of dark pixel in target image, wherein dark pixel includes that at least one is logical
Road value is less than the Color Channel of channel threshold value.
Step S132 determines the luminous environment where image to be processed according to the ratio of dark pixel.
In the above steps, a dark pixel contains at least one color channel whose channel value is less than the channel threshold; that is, for a pixel in the target image, if the channel value of any one of its color channels is less than the channel threshold, the pixel can be regarded as a dark pixel. How many color channels a pixel in the target image specifically contains is not limited here and is determined by the actual situation of the image to be processed. No matter how many color channels a pixel in the target image contains, in the embodiments of the present disclosure, as long as one color channel of the pixel has a channel value below the channel threshold, the pixel can be regarded as a dark pixel; conversely, if the channel values of all color channels of a pixel in the target image are not less than the channel threshold, the pixel can be regarded as a non-dark pixel. The value of the channel threshold is likewise not limited and can be chosen flexibly according to the actual situation; in one example, the channel threshold may be 100.
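As an illustrative sketch (not part of the patented embodiment itself), the dark-pixel criterion described above, with the example channel threshold of 100, could be expressed as follows; the function name and signature are hypothetical:

```python
# Hypothetical sketch of the dark-pixel criterion described above:
# a pixel is "dark" if at least one of its colour channels is below
# the channel threshold (100 in the example given in the text).
CHANNEL_THRESHOLD = 100

def is_dark_pixel(pixel, threshold=CHANNEL_THRESHOLD):
    """Return True if any channel value of the pixel is below the threshold."""
    return any(channel < threshold for channel in pixel)

print(is_dark_pixel((200, 180, 40)))   # True: the B channel (40) is below 100
print(is_dark_pixel((150, 150, 150)))  # False: no channel is below 100
```

The test is per-pixel and channel-count agnostic, matching the text's point that the number of color channels is not limited.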
It should be noted that the criterion for judging whether a pixel in the target image is a dark pixel is not limited to the above form; those skilled in the art can flexibly extend and modify it according to the actual situation. For example, in one example, the criterion for a dark pixel may be: for a pixel containing the three color channels R, G and B, the pixel is regarded as a dark pixel only when the channel value of the R channel is less than a first channel threshold, the channel value of the G channel is less than a second channel threshold, and the channel value of the B channel is less than a third channel threshold, wherein the specific values of the first channel threshold, the second channel threshold and the third channel threshold can be determined according to the actual situation. It can be seen that in the embodiments of the present disclosure the judgment criterion for dark pixels can be flexibly extended according to the actual situation.
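The stricter variant described in this example, where a pixel is dark only when every channel falls below its own per-channel threshold, could be sketched as follows; the threshold values are illustrative assumptions, not taken from the disclosure:

```python
def is_dark_pixel_strict(pixel, thresholds=(90, 90, 90)):
    """A pixel is dark only if each of its R, G, B channels is below
    the corresponding (hypothetical) per-channel threshold."""
    return all(c < t for c, t in zip(pixel, thresholds))

print(is_dark_pixel_strict((40, 50, 60)))    # True: all channels below 90
print(is_dark_pixel_strict((40, 50, 120)))   # False: the B channel is not below 90
```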
Using the above judgment criterion, it can be determined whether each pixel in the target image is a dark pixel, and the proportion of dark pixels in the target image can then be determined. According to this proportion of dark pixels, the luminous environment in which the image to be processed is located can be determined. This approach determines the luminous environment conveniently: while guaranteeing the accuracy of the result, it also increases the speed of the determination, thereby improving the efficiency of image processing while maintaining its precision.
The implementation of step S132 is likewise not limited; that is, how the luminous environment of the image to be processed is determined from the ratio of dark pixels is not particularly restricted, and can be chosen flexibly according to the actual situation of the image and the related shooting environment. In one possible implementation, step S132 may include:
when the ratio of dark pixels is lower than a first proportion threshold, determining that the luminous environment in which the image to be processed is located is a backlight environment;
when the ratio of dark pixels is higher than a second proportion threshold, determining that the luminous environment in which the image to be processed is located is a dim-light environment.
In the above process, the specific values of the first proportion threshold and the second proportion threshold are likewise not limited; they can be chosen flexibly according to the actual situation and are not restricted to the following disclosed examples. In one possible implementation, according to statistics, for a picture taken under backlight, the brightness of the pixels in the target image is relatively high, so the statistically obtained proportion of dark pixels is close to 0; for a picture taken under dim light, the brightness of the pixels in the target image is relatively low, so the statistically obtained proportion of dark pixels is close to 1. Based on these statistical results, in one example the first proportion threshold may be 0.01, and in one example the second proportion threshold may be 0.95.
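With the example thresholds of 0.01 and 0.95, the three-way classification of step S132 could be sketched as below; the environment labels and the function name are assumptions for illustration:

```python
def classify_light_environment(dark_ratio, low=0.01, high=0.95):
    """Map the dark-pixel ratio of the target image to a luminous environment."""
    if dark_ratio < low:
        return "backlight"   # almost no dark pixels -> backlit scene
    if dark_ratio > high:
        return "dim"         # almost all pixels dark -> dim-light scene
    return "normal"          # anything in between -> ordinary light

print(classify_light_environment(0.005))  # backlight
print(classify_light_environment(0.3))    # normal
print(classify_light_environment(0.97))   # dim
```

Because the decision reduces to two scalar comparisons, the thresholds can be swapped for other values, or more thresholds added, without changing the structure, matching the text's remark that the division can be made finer or coarser.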
Through the first proportion threshold and the second proportion threshold, the luminous environment in which the image to be processed is located can be divided into three cases, namely a backlight environment, a dim-light environment and an ordinary-light environment, so that the division of the luminous environment is realized by a quantitative standard, which improves the objectivity and accuracy of the division result. In actual operation, the luminous environment can also be divided into more or fewer types through other proportion thresholds, chosen flexibly according to the application requirements of the image to be processed and the actual situation of the image.
In one possible implementation, the image processing method proposed by the embodiments of the present disclosure may further include: performing, on the image to be processed, processing adapted to the luminous environment.
By performing processing adapted to the determined luminous environment on the image to be processed, the influence of that luminous environment on parameters such as the quality of the image to be processed can be correspondingly reduced, so that the quality and effect of the image to be processed can be effectively improved.
Which specific processing adapted to the luminous environment is performed on the image to be processed depends on the category of the image to be processed and the luminous environment in which it is located; it can be chosen flexibly according to the actual situation and is not limited to the following embodiments. In one possible implementation, performing processing adapted to the luminous environment on the image to be processed may include:
when the luminous environment in which the image to be processed is located is a backlight environment or a dim-light environment, changing the brightness of the image to be processed.
In the above process, when the luminous environment in which the image to be processed is located is a backlight environment or a dim-light environment, the brightness of the image to be processed is changed. How exactly the brightness is changed can likewise be determined according to the actual type of the image to be processed. In one example, the image to be processed may be an image containing a face region; if it is determined, through the process of any of the above disclosed embodiments, that the luminous environment of the image is a dim-light or backlight environment, then in order to improve the realism of the image, the brightness of some regions in the image may be reduced. In one example, the brightness of the lip region in the image to be processed may be reduced to improve the realism of the whole facial image. In another example, the image to be processed may be a scene image containing a certain target building; if it is determined through the process of any of the above disclosed embodiments that the luminous environment of the image is a dim-light or backlight environment, then in order to make the target building more recognizable, the brightness of the target building in the image to be processed may be increased. It can be seen from the above disclosed embodiments that how the brightness of the image to be processed is changed in a dim-light or backlight environment can be determined according to the type of the image and the actual situation; in some cases, different ways of changing the brightness may be adopted in the dim-light environment and the backlight environment, determined through concrete analysis of the actual situation.
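A minimal sketch of such an environment-adapted brightness change, using NumPy; the mask-based approach and the scaling factor are assumptions, since the disclosure deliberately leaves the concrete adjustment open:

```python
import numpy as np

def adjust_region_brightness(image, region_mask, factor):
    """Scale the brightness of the masked region by `factor`
    (factor < 1 darkens, e.g. a lip region; factor > 1 brightens,
    e.g. a target building), clipping to the valid 0-255 range."""
    out = image.astype(np.float32)
    out[region_mask] *= factor
    return np.clip(out, 0, 255).astype(np.uint8)

img = np.full((4, 4, 3), 100, dtype=np.uint8)
mask = np.zeros((4, 4), dtype=bool)
mask[1:3, 1:3] = True                       # hypothetical region of interest
darkened = adjust_region_brightness(img, mask, 0.8)
print(darkened[1, 1])  # [80 80 80] inside the region
print(darkened[0, 0])  # [100 100 100] untouched outside the region
```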
Example Application Scenario
Applying beauty (makeup) effects to facial images is currently one of the main ways of processing face images. However, because the shooting environment and shooting manner of images may differ, a facial image to be beautified may be in any of several different luminous environments. If facial images in different luminous environments are beautified in the same way, the quality and authenticity of the final beautified image may well fail to meet requirements.
Therefore, an image processing method that can determine the luminous environment in which the image to be processed is located can significantly improve the quality of the beautified image, thereby broadening the quality and application range of the image processing method.
Fig. 3 and Fig. 4 show schematic diagrams of an application example according to the present disclosure. As shown, the embodiments of the present disclosure propose an image processing method whose detailed procedure may be as follows:
First, face detection is performed on the received image to be processed to determine the specific coordinates of the face region in the image. Based on these coordinates, the image to be processed can be cropped to obtain an image of the face region; the resulting face region image is shown in Fig. 3.
After the image of the face region has been obtained, its extent can be further reduced to lessen the influence of pixels unrelated to the face region on the subsequent image processing. In the embodiments of the present disclosure, the face region image before reduction is shown in Fig. 3; after its extent is reduced, the resulting face region image is shown in Fig. 4.
After the reduced face region image has been obtained, the resolution of the image can be further lowered to reduce the number of pixels it contains and thereby reduce the computation of subsequent image processing. In the embodiments of the present disclosure, the face region image can be scaled down to a size of 90 pixels in width and 160 pixels in height, and the image of this size is used as the target image for final processing.
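The scale-down to a 90×160 target image could be sketched with nearest-neighbour sampling in NumPy; a real system would more likely use a library resize, so this is only to illustrate the pixel-count reduction:

```python
import numpy as np

def downscale_nearest(image, out_w=90, out_h=160):
    """Reduce an H x W x C image to out_h x out_w by nearest-neighbour
    sampling, cutting the pixel count handled by later processing."""
    h, w = image.shape[:2]
    rows = np.arange(out_h) * h // out_h   # source row for each output row
    cols = np.arange(out_w) * w // out_w   # source column for each output column
    return image[rows[:, None], cols]

face = np.random.randint(0, 256, (640, 360, 3), dtype=np.uint8)
target = downscale_nearest(face)
print(target.shape)  # (160, 90, 3): 14,400 pixels instead of 230,400
```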
After the target image has been obtained, pixel statistics can be performed on it to determine the luminous environment in which the image to be processed is located. In the embodiments of the present disclosure, the main basis for determining the luminous environment through pixel statistics is as follows: for an image, the pixels it contains can be classified into types according to their channel values. For example, one type of pixel is the dark pixel, i.e. a pixel in which at least one color channel (such as the R channel, the G channel or the B channel) has a value lower than a set threshold. A face under ordinary light generally contains a certain proportion of dark pixels; for example, the eyebrows, eyeballs, nostrils and corners of the mouth contain a certain number of dark pixels. For a face under backlight, the brightness of the face region pixels is usually high, and the counted proportion of dark pixels is close to 0; for a face under dim light, the brightness of the face region pixels is usually low, and the counted proportion of dark pixels is close to 1. Therefore, on this basis, the luminous environment of the image to be processed can be judged through pixel statistics. In the embodiments of the present disclosure, the pixels of the target image obtained above can be counted to obtain the proportion of dark pixels. The detailed process is: for a pixel under examination, if the value of any one of its color channels is less than 100 (normal channel values lie between 0 and 255), the pixel can be determined to be a dark pixel. If the proportion of dark pixels in the target image is less than 0.01, it can be determined that the image to be processed is in a backlight environment; if the proportion of dark pixels in the target image is greater than 0.95, it can be determined that the image to be processed is in a dim-light environment.
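Putting this application example together — counting dark pixels over the whole target image and applying the 0.01/0.95 proportion thresholds — might look like the following sketch; the function names and synthetic inputs are illustrative:

```python
import numpy as np

def dark_pixel_ratio(image, channel_threshold=100):
    """Fraction of pixels with at least one colour channel below the threshold."""
    dark = (image < channel_threshold).any(axis=-1)
    return float(dark.mean())

def judge_environment(image):
    ratio = dark_pixel_ratio(image)
    if ratio < 0.01:
        return "backlight"
    if ratio > 0.95:
        return "dim"
    return "normal"

bright = np.full((160, 90, 3), 220, dtype=np.uint8)   # backlit-like target image
shadow = np.full((160, 90, 3), 30, dtype=np.uint8)    # dim-light-like target image
print(judge_environment(bright))  # backlight
print(judge_environment(shadow))  # dim
```

Vectorizing the per-pixel test over the 90×160 target image keeps the whole judgment to a couple of array operations, consistent with the low computational cost the text attributes to this step.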
After it has been determined that the image to be processed is in a backlight or dim-light environment, the manner of beautification can be adjusted accordingly during processing. In the embodiments of the present disclosure, if the image to be processed is in a backlight or dim-light environment, it may be considered, during the coloring of the lip region, to turn down the coloring brightness of the lip region so as to improve the naturalness of the whole image. The beautification of other facial parts can be adapted correspondingly, which will not be enumerated here.
It should be noted that the image processing method of the embodiments of the present disclosure is not limited to the above processing of images containing face regions, nor to the above process of applying beauty effects to facial images; it can be applied to arbitrary image processing, and the present disclosure places no limitation on this.
It can be understood that the method embodiments mentioned above in the present disclosure can, without violating their principles and logic, be combined with each other to form combined embodiments; owing to limited space, the present disclosure does not elaborate further.
Those skilled in the art will understand that, in the above methods of the specific embodiments, the order in which the steps are written does not imply a strict execution order or impose any restriction on the implementation process; the specific execution order of the steps should be determined by their functions and possible internal logic.
Fig. 5 shows a block diagram of an image processing apparatus according to an embodiment of the present disclosure. The image processing apparatus may be a terminal device, a server or other processing equipment, wherein the terminal device may be a user equipment (UE), a mobile device, a user terminal, a terminal, a cellular phone, a cordless phone, a personal digital assistant (PDA), a handheld device, a computing device, a vehicle-mounted device, a wearable device, or the like.
In some possible implementations, the image processing apparatus may be realized by a processor calling computer-readable instructions stored in a memory.
As shown in Fig. 5, the image processing apparatus may include:
an acquisition module 21, configured to obtain an image to be processed;
an object extraction module 22, configured to perform object extraction on the image to be processed to obtain a target image;
a first processing module 23, configured to perform pixel statistics on the target image and, according to the result of the pixel statistics, determine the luminous environment in which the image to be processed is located.
In one possible implementation, the object extraction module is configured to: perform segmentation processing on the image to be processed to obtain a segmentation result, and use the segmentation result as the target image.
In one possible implementation, the object extraction module is further configured to: perform size adjustment on the segmentation result, and use the size-adjusted segmentation result as the target image.
In one possible implementation, the object extraction module is further configured to: crop the segmentation result, and/or reduce the resolution of the segmentation result.
In one possible implementation, the first processing module is configured to: count the ratio of dark pixels in the target image, wherein a dark pixel contains at least one color channel whose channel value is less than a channel threshold; and determine the luminous environment in which the image to be processed is located according to the ratio of dark pixels.
In one possible implementation, the first processing module is further configured to: when the ratio of dark pixels is lower than a first proportion threshold, determine that the luminous environment in which the image to be processed is located is a backlight environment; and when the ratio of dark pixels is higher than a second proportion threshold, determine that the luminous environment in which the image to be processed is located is a dim-light environment.
In one possible implementation, the apparatus further includes a second processing module, and the second processing module is configured to: perform, on the image to be processed, processing adapted to the luminous environment.
In one possible implementation, the second processing module is further configured to: when the luminous environment in which the image to be processed is located is a backlight environment or a dim-light environment, change the brightness of the image to be processed.
In one possible implementation, the image to be processed is an image containing a face region, and the target image is a facial image.
The embodiments of the present disclosure also propose a computer-readable storage medium on which computer program instructions are stored, the computer program instructions implementing the above method when executed by a processor. The computer-readable storage medium may be a non-volatile computer-readable storage medium.
The embodiments of the present disclosure also propose an electronic device, comprising: a processor; and a memory for storing processor-executable instructions; wherein the processor is configured to perform the above method.
The electronic device may be provided as a terminal, a server, or equipment of another form.
Fig. 6 is a block diagram of an electronic device 800 according to an embodiment of the present disclosure. For example, the electronic device 800 may be a terminal such as a mobile phone, a computer, a digital broadcast terminal, a messaging device, a game console, a tablet device, a medical device, fitness equipment or a personal digital assistant.
Referring to Fig. 6, the electronic device 800 may include one or more of the following components: a processing component 802, a memory 804, a power supply component 806, a multimedia component 808, an audio component 810, an input/output (I/O) interface 812, a sensor component 814, and a communication component 816.
The processing component 802 generally controls the overall operation of the electronic device 800, such as operations associated with display, telephone calls, data communication, camera operation and recording. The processing component 802 may include one or more processors 820 to execute instructions so as to perform all or part of the steps of the above method. In addition, the processing component 802 may include one or more modules to facilitate interaction between the processing component 802 and the other components; for example, the processing component 802 may include a multimedia module to facilitate interaction between the multimedia component 808 and the processing component 802.
The memory 804 is configured to store various types of data to support operation on the electronic device 800. Examples of such data include instructions of any application or method operated on the electronic device 800, contact data, phone book data, messages, pictures, videos, and so on. The memory 804 may be realized by any type of volatile or non-volatile storage device or a combination thereof, such as static random access memory (SRAM), electrically erasable programmable read-only memory (EEPROM), erasable programmable read-only memory (EPROM), programmable read-only memory (PROM), read-only memory (ROM), magnetic memory, flash memory, magnetic disk or optical disk.
The power supply component 806 provides power for the various components of the electronic device 800. The power supply component 806 may include a power management system, one or more power supplies, and other components associated with generating, managing and distributing power for the electronic device 800.
The multimedia component 808 includes a screen that provides an output interface between the electronic device 800 and the user. In some embodiments, the screen may include a liquid crystal display (LCD) and a touch panel (TP). If the screen includes a touch panel, the screen may be implemented as a touch screen to receive input signals from the user. The touch panel includes one or more touch sensors to sense touches, slides and gestures on the touch panel. The touch sensor may not only sense the boundary of a touch or slide action but also detect the duration and pressure associated with the touch or slide operation. In some embodiments, the multimedia component 808 includes a front camera and/or a rear camera. When the electronic device 800 is in an operation mode, such as a shooting mode or a video mode, the front camera and/or the rear camera can receive external multimedia data. Each front camera and rear camera may be a fixed optical lens system or have focusing and optical zoom capability.
The audio component 810 is configured to output and/or input audio signals. For example, the audio component 810 includes a microphone (MIC) that is configured to receive external audio signals when the electronic device 800 is in an operation mode, such as a call mode, a recording mode or a voice recognition mode. The received audio signals may be further stored in the memory 804 or sent via the communication component 816. In some embodiments, the audio component 810 further includes a loudspeaker for outputting audio signals.
The I/O interface 812 provides an interface between the processing component 802 and peripheral interface modules, which may be a keyboard, a click wheel, buttons, and so on. These buttons may include, but are not limited to: a home button, volume buttons, a start button and a lock button.
The sensor component 814 includes one or more sensors for providing state assessments of various aspects of the electronic device 800. For example, the sensor component 814 can detect the open/closed state of the electronic device 800 and the relative positioning of components, for example the display and keypad of the electronic device 800; the sensor component 814 can also detect a change in the position of the electronic device 800 or of one of its components, the presence or absence of user contact with the electronic device 800, the orientation or acceleration/deceleration of the electronic device 800, and a change in the temperature of the electronic device 800. The sensor component 814 may include a proximity sensor configured to detect the presence of nearby objects without any physical contact. The sensor component 814 may also include a light sensor, such as a CMOS or CCD image sensor, for use in imaging applications. In some embodiments, the sensor component 814 may also include an acceleration sensor, a gyroscope sensor, a magnetic sensor, a pressure sensor or a temperature sensor.
The communication component 816 is configured to facilitate wired or wireless communication between the electronic device 800 and other equipment. The electronic device 800 can access a wireless network based on a communication standard, such as WiFi, 2G or 3G, or a combination thereof. In one exemplary embodiment, the communication component 816 receives, via a broadcast channel, a broadcast signal or broadcast-related information from an external broadcast management system. In one exemplary embodiment, the communication component 816 further includes a near field communication (NFC) module to promote short-range communication. For example, the NFC module can be realized based on radio frequency identification (RFID) technology, Infrared Data Association (IrDA) technology, ultra-wideband (UWB) technology, Bluetooth (BT) technology and other technologies.
In an exemplary embodiment, the electronic device 800 can be realized by one or more application-specific integrated circuits (ASIC), digital signal processors (DSP), digital signal processing devices (DSPD), programmable logic devices (PLD), field programmable gate arrays (FPGA), controllers, microcontrollers, microprocessors or other electronic components, for executing the above method.
In an exemplary embodiment, a non-volatile computer-readable storage medium is also provided, for example the memory 804 including computer program instructions, which can be executed by the processor 820 of the electronic device 800 to complete the above method.
Fig. 7 is a block diagram of an electronic device 1900 according to an embodiment of the present disclosure. For example, the electronic device 1900 may be provided as a server. Referring to Fig. 7, the electronic device 1900 includes a processing component 1922, which further comprises one or more processors, and a memory resource represented by a memory 1932 for storing instructions executable by the processing component 1922, such as applications. The applications stored in the memory 1932 may include one or more modules, each corresponding to a group of instructions. In addition, the processing component 1922 is configured to execute the instructions so as to perform the above method.
The electronic device 1900 may also include a power supply component 1926 configured to perform power management of the electronic device 1900, a wired or wireless network interface 1950 configured to connect the electronic device 1900 to a network, and an input/output (I/O) interface 1958. The electronic device 1900 can operate based on an operating system stored in the memory 1932, such as Windows Server™, Mac OS X™, Unix™, Linux™, FreeBSD™ or the like.
In an exemplary embodiment, a non-volatile computer-readable storage medium is also provided, for example the memory 1932 including computer program instructions, which can be executed by the processing component 1922 of the electronic device 1900 to complete the above method.
The present disclosure may be a system, a method and/or a computer program product. The computer program product may include a computer-readable storage medium carrying computer-readable program instructions for causing a processor to realize various aspects of the present disclosure.
The computer-readable storage medium may be a tangible device that can hold and store instructions used by an instruction-executing device. The computer-readable storage medium may be, for example, but is not limited to, an electric storage device, a magnetic storage device, an optical storage device, an electromagnetic storage device, a semiconductor storage device or any suitable combination of the above. More specific examples (a non-exhaustive list) of the computer-readable storage medium include: a portable computer diskette, a hard disk, a random access memory (RAM), a read-only memory (ROM), an erasable programmable read-only memory (EPROM or flash memory), a static random access memory (SRAM), a portable compact disk read-only memory (CD-ROM), a digital versatile disc (DVD), a memory stick, a floppy disk, a mechanically encoded device such as a punch card or an in-groove raised structure on which instructions are stored, and any suitable combination of the above. The computer-readable storage medium used herein is not to be interpreted as a transient signal itself, such as a radio wave or other freely propagating electromagnetic wave, an electromagnetic wave propagating through a waveguide or other transmission medium (for example, a light pulse through a fiber-optic cable), or an electric signal transmitted through a wire.
The computer-readable program instructions described herein can be downloaded from the computer-readable storage medium to each computing/processing device, or downloaded to an external computer or external storage device through a network, such as the Internet, a local area network, a wide area network and/or a wireless network. The network may include copper transmission cables, optical fiber transmission, wireless transmission, routers, firewalls, switches, gateway computers and/or edge servers. A network adapter or network interface in each computing/processing device receives the computer-readable program instructions from the network and forwards them for storage in the computer-readable storage medium in each computing/processing device.
The computer program instructions for executing the operations of the present disclosure may be assembly instructions, instruction set architecture (ISA) instructions, machine instructions, machine-dependent instructions, microcode, firmware instructions, state-setting data, or source code or object code written in any combination of one or more programming languages, including object-oriented programming languages such as Smalltalk or C++, and conventional procedural programming languages such as the "C" language or similar programming languages. The computer-readable program instructions can be executed entirely on the user's computer, partly on the user's computer, as a stand-alone software package, partly on the user's computer and partly on a remote computer, or entirely on a remote computer or server. Where a remote computer is involved, the remote computer can be connected to the user's computer through any kind of network, including a local area network (LAN) or a wide area network (WAN), or can be connected to an external computer (for example, through the Internet by utilizing an Internet service provider). In some embodiments, an electronic circuit, such as a programmable logic circuit, a field programmable gate array (FPGA) or a programmable logic array (PLA), is customized by utilizing the state information of the computer-readable program instructions; the electronic circuit can execute the computer-readable program instructions to realize various aspects of the present disclosure.
Various aspects of the present disclosure are described herein with reference to flow charts and/or block diagrams of methods, apparatuses (systems) and computer program products according to the embodiments of the present disclosure. It should be understood that each box of the flow charts and/or block diagrams, and combinations of boxes in the flow charts and/or block diagrams, can be realized by computer-readable program instructions.
These computer-readable program instructions can be supplied to a processor of a general-purpose computer, a special-purpose computer or another programmable data processing apparatus, so as to produce a machine, so that when these instructions are executed by the processor of the computer or other programmable data processing apparatus, a device is produced that implements the functions/actions specified in one or more boxes of the flow charts and/or block diagrams. These computer-readable program instructions can also be stored in a computer-readable storage medium; these instructions cause the computer, the programmable data processing apparatus and/or other equipment to work in a specific way, so that the computer-readable medium storing the instructions comprises a manufactured article that includes instructions implementing various aspects of the functions/actions specified in one or more boxes of the flow charts and/or block diagrams.
The computer-readable program instructions can also be loaded onto a computer, another programmable data processing apparatus or other equipment, so that a series of operational steps are executed on the computer, the other programmable data processing apparatus or the other equipment to produce a computer-implemented process, so that the instructions executed on the computer, the other programmable data processing apparatus or the other equipment implement the functions/actions specified in one or more boxes of the flow charts and/or block diagrams.
The flow charts and block diagrams in the drawings show the possible architecture, functions and operations of systems, methods and computer program products according to multiple embodiments of the present disclosure. In this regard, each box in the flow charts or block diagrams can represent a module, a program segment or a part of instructions, and the module, program segment or part of instructions contains one or more executable instructions for realizing the specified logical function. In some alternative implementations, the functions marked in the boxes may also occur in a different order than that indicated in the drawings. For example, two consecutive boxes can in fact be executed substantially in parallel, and they can sometimes be executed in the opposite order, depending on the functions involved. It should also be noted that each box of the block diagrams and/or flow charts, and combinations of boxes in the block diagrams and/or flow charts, can be realized by a dedicated hardware-based system that executes the specified functions or actions, or can be realized by a combination of dedicated hardware and computer instructions.
The embodiments of the present disclosure have been described above. The foregoing description is exemplary rather than exhaustive, and is not limited to the disclosed embodiments. Many modifications and variations will be apparent to those of ordinary skill in the art without departing from the scope and spirit of the illustrated embodiments. The terms used herein were chosen to best explain the principles of the embodiments, their practical application, or their improvement over technology available in the market, or to enable others of ordinary skill in the art to understand the embodiments disclosed herein.
Claims (10)
1. An image processing method, comprising:
obtaining an image to be processed;
performing target extraction on the image to be processed to obtain a target image; and
performing pixel statistics on the target image, and determining, according to a result of the pixel statistics, a light environment in which the image to be processed is located.
2. The method according to claim 1, wherein the performing target extraction on the image to be processed to obtain a target image comprises:
performing segmentation processing on the image to be processed to obtain a segmentation result, and using the segmentation result as the target image.
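As an illustration of claim 2: once a segmentation result is available (in practice it would come from a segmentation model; here a binary mask is simply given as input), applying it to the image yields the target image. A minimal NumPy sketch, not the patent's prescribed implementation:

```python
import numpy as np

def extract_target(image, mask):
    """Apply a binary segmentation mask to an H x W x 3 image.

    `mask` (H x W, values 0/1) stands in for a segmentation result,
    however it was obtained. Pixels outside the mask are zeroed,
    leaving only the segmented target for later pixel statistics.
    """
    return image * mask[:, :, np.newaxis]

# Toy example: 2x2 bright image, mask keeps only the left column.
image = np.full((2, 2, 3), 200, dtype=np.uint8)
mask = np.array([[1, 0], [1, 0]], dtype=np.uint8)
target = extract_target(image, mask)
```

After masking, `target` keeps the value 200 in the left column and is zero elsewhere, so subsequent statistics operate only on the extracted region.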
3. The method according to claim 2, wherein the performing target extraction on the image to be processed to obtain a target image further comprises:
adjusting the size of the segmentation result, and using the size-adjusted segmentation result as the target image.
4. The method according to claim 3, wherein the adjusting the size of the segmentation result comprises:
cropping the segmentation result, and/or reducing the resolution of the segmentation result.
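The cropping and resolution reduction of claim 4 can be sketched as below. The crop box and the subsampling-based downscaling are illustrative choices only; a production pipeline might use area interpolation instead:

```python
import numpy as np

def resize_segmentation(seg, crop_box=None, scale=1.0):
    """Crop and/or reduce the resolution of a 2-D segmentation result.

    crop_box: (top, bottom, left, right) bounds, or None to skip cropping.
    scale:    e.g. 0.5 halves each dimension by simple subsampling.
    """
    if crop_box is not None:
        top, bottom, left, right = crop_box
        seg = seg[top:bottom, left:right]
    if scale != 1.0:
        step = int(round(1.0 / scale))
        seg = seg[::step, ::step]
    return seg

seg = np.arange(64).reshape(8, 8)
small = resize_segmentation(seg, crop_box=(0, 8, 0, 8), scale=0.5)
```

Here an 8x8 segmentation result is reduced to 4x4, shrinking the amount of data the later statistics step must process.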
5. The method according to any one of claims 1 to 4, wherein the performing pixel statistics on the target image and determining, according to the result of the pixel statistics, the light environment in which the image to be processed is located comprises:
counting a proportion of dark pixels in the target image, wherein a dark pixel includes at least one color channel whose channel value is less than a channel threshold; and
determining, according to the proportion of dark pixels, the light environment in which the image to be processed is located.
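The dark-pixel statistic of claim 5 can be sketched as follows; the channel threshold of 80 is a placeholder, since the claim does not fix a value:

```python
import numpy as np

def dark_pixel_ratio(target, channel_threshold=80):
    """Fraction of pixels in an H x W x C image having at least one
    color channel below `channel_threshold` (placeholder value)."""
    dark = (target < channel_threshold).any(axis=-1)
    return float(dark.mean())

# Toy example: top row bright (200 in every channel), bottom row black.
img = np.zeros((2, 2, 3), dtype=np.uint8)
img[0] = 200
ratio = dark_pixel_ratio(img, channel_threshold=80)
```

For this toy image exactly half the pixels are dark, so the ratio is 0.5. Note that if the background of a masked target image is zeroed out, those zeros count as dark; a real implementation would likely restrict the statistic to pixels inside the segmentation mask.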
6. The method according to claim 5, wherein the determining, according to the proportion of dark pixels, the light environment in which the image to be processed is located comprises:
when the proportion of dark pixels is lower than a first proportion threshold, determining that the light environment in which the image to be processed is located is a backlight environment; and
when the proportion of dark pixels is higher than a second proportion threshold, determining that the light environment in which the image to be processed is located is a dim-light environment.
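The two-threshold decision of claim 6 reduces to a simple comparison. The threshold values below are placeholders, as the claim does not specify them:

```python
def classify_light_environment(ratio, first_threshold=0.2, second_threshold=0.6):
    """Map a dark-pixel ratio to a light environment, per claim 6:
    below the first threshold -> backlight environment;
    above the second threshold -> dim-light environment;
    otherwise neither condition applies (treated here as 'normal')."""
    if ratio < first_threshold:
        return "backlight"
    if ratio > second_threshold:
        return "dim"
    return "normal"
```

For example, with the placeholder thresholds a ratio of 0.1 is classified as backlight and a ratio of 0.8 as dim-light.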
7. The method according to any one of claims 1 to 6, further comprising:
performing, on the image to be processed, processing adapted to the light environment.
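One possible adaptation per claim 7, using gamma curves as an assumed example of environment-adapted processing; the claim leaves the specific processing open, and these exponents are illustrative only:

```python
import numpy as np

def adapt_to_environment(image, environment):
    """Apply an environment-dependent tone curve to a uint8 image.

    'dim'       -> strong gamma brightening,
    'backlight' -> milder shadow lift,
    anything else -> unchanged.
    Both curves are placeholder choices, not the patent's method.
    """
    img = image.astype(np.float32) / 255.0
    if environment == "dim":
        img = img ** 0.5
    elif environment == "backlight":
        img = img ** 0.7
    return (img * 255.0).round().astype(np.uint8)

out = adapt_to_environment(np.full((1, 1, 3), 64, dtype=np.uint8), "dim")
```

A mid-dark pixel value of 64 is lifted to 128 by the dim-light curve, while an image classified as neither dim nor backlit passes through unchanged.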
8. An image processing apparatus, comprising:
an obtaining module, configured to obtain an image to be processed;
a target extraction module, configured to perform target extraction on the image to be processed to obtain a target image; and
a first processing module, configured to perform pixel statistics on the target image and determine, according to a result of the pixel statistics, a light environment in which the image to be processed is located.
9. An electronic device, comprising:
a processor; and
a memory for storing processor-executable instructions;
wherein the processor is configured to invoke the instructions stored in the memory to perform the method according to any one of claims 1 to 7.
10. A computer-readable storage medium having computer program instructions stored thereon, wherein the computer program instructions, when executed by a processor, implement the method according to any one of claims 1 to 7.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201910381154.2A CN110111281A (en) | 2019-05-08 | 2019-05-08 | Image processing method and device, electronic equipment and storage medium |
Applications Claiming Priority (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201910381154.2A CN110111281A (en) | 2019-05-08 | 2019-05-08 | Image processing method and device, electronic equipment and storage medium |
Publications (1)
Publication Number | Publication Date |
---|---|
CN110111281A true CN110111281A (en) | 2019-08-09 |
Family
ID=67488895
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN201910381154.2A Pending CN110111281A (en) | 2019-05-08 | 2019-05-08 | Image processing method and device, electronic equipment and storage medium |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN110111281A (en) |
Cited By (6)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN110532113A (en) * | 2019-08-30 | 2019-12-03 | 北京地平线机器人技术研发有限公司 | Information processing method, device, computer readable storage medium and electronic equipment |
CN111343385A (en) * | 2020-03-23 | 2020-06-26 | 东软睿驰汽车技术(沈阳)有限公司 | Method, device, equipment and storage medium for determining environment brightness |
WO2021098609A1 (en) * | 2019-11-22 | 2021-05-27 | 华为技术有限公司 | Method and device for image detection, and electronic device |
CN112950525A (en) * | 2019-11-22 | 2021-06-11 | 华为技术有限公司 | Image detection method and device and electronic equipment |
CN114760422A (en) * | 2022-03-21 | 2022-07-15 | 展讯半导体(南京)有限公司 | Backlight detection method and system, electronic equipment and storage medium |
WO2023045946A1 (en) * | 2021-09-27 | 2023-03-30 | 上海商汤智能科技有限公司 | Image processing method and apparatus, electronic device, and storage medium |
Citations (7)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN103345733A (en) * | 2013-07-31 | 2013-10-09 | 哈尔滨工业大学 | Rapid low-illumination image enhancing method based on improved dark channel prior |
CN104361566A (en) * | 2014-11-17 | 2015-02-18 | 厦门美图之家科技有限公司 | Picture processing method for optimizing dark region |
CN106408526A (en) * | 2016-08-25 | 2017-02-15 | 南京邮电大学 | Visibility detection method based on multilayer vectogram |
CN107454319A (en) * | 2017-07-27 | 2017-12-08 | 广东欧珀移动通信有限公司 | Image processing method, device, mobile terminal and computer-readable recording medium |
CN107451969A (en) * | 2017-07-27 | 2017-12-08 | 广东欧珀移动通信有限公司 | Image processing method, device, mobile terminal and computer-readable recording medium |
CN107682611A (en) * | 2017-11-03 | 2018-02-09 | 广东欧珀移动通信有限公司 | Method, apparatus, computer-readable recording medium and the electronic equipment of focusing |
CN108921823A (en) * | 2018-06-08 | 2018-11-30 | Oppo广东移动通信有限公司 | Image processing method, device, computer readable storage medium and electronic equipment |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
CN110111281A (en) | Image processing method and device, electronic equipment and storage medium | |
CN106651955B (en) | Method and device for positioning target object in picture | |
CN105095881B (en) | Face recognition method, face recognition device and terminal | |
CN109784255B (en) | Neural network training method and device and recognition method and device | |
CN104918107B (en) | The identification processing method and device of video file | |
CN106331504B (en) | Shooting method and device | |
CN110363150A (en) | Data-updating method and device, electronic equipment and storage medium | |
CN109522910A (en) | Critical point detection method and device, electronic equipment and storage medium | |
CN110298310A (en) | Image processing method and device, electronic equipment and storage medium | |
CN107730448B (en) | Beautifying method and device based on image processing | |
CN107944367B (en) | Face key point detection method and device | |
KR20210065180A (en) | Image processing method and apparatus, electronic device and storage medium | |
CN110503023A (en) | Biopsy method and device, electronic equipment and storage medium | |
KR101906748B1 (en) | Iris image acquisition method and apparatus, and iris recognition device | |
CN110532957B (en) | Face recognition method and device, electronic equipment and storage medium | |
CN112219224B (en) | Image processing method and device, electronic equipment and storage medium | |
CN105957037B (en) | Image enchancing method and device | |
CN110378312A (en) | Image processing method and device, electronic equipment and storage medium | |
CN109377446A (en) | Processing method and processing device, electronic equipment and the storage medium of facial image | |
CN105528765A (en) | Method and device for processing image | |
CN108900903A (en) | Method for processing video frequency and device, electronic equipment and storage medium | |
CN107424130B (en) | Picture beautifying method and device | |
CN109255784A (en) | Image processing method and device, electronic equipment and storage medium | |
CN107507128B (en) | Image processing method and apparatus | |
CN111340691A (en) | Image processing method, image processing device, electronic equipment and storage medium |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||