CN111182294A - Intelligent household appliance control method for improving image quality and intelligent household appliance - Google Patents


Info

Publication number: CN111182294A (application CN202010009546.9A, China); granted as CN111182294B
Authority: CN (China)
Prior art keywords: image, working environment, value, household appliance, determining
Legal status: Granted; Active
Other languages: Chinese (zh)
Inventors: 朱泽春, 李宏峰
Original and Current Assignee: Hangzhou Joyoung Household Electrical Appliances Co Ltd
(The legal status, assignee and priority-date information above are assumptions made by Google, not legal conclusions; Google has not performed a legal analysis and makes no representation as to their accuracy.)

Classifications

    • H: ELECTRICITY
    • H04: ELECTRIC COMMUNICATION TECHNIQUE
    • H04N: PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N17/00: Diagnosis, testing or measuring for television systems or their details
    • H04N17/02: Diagnosis, testing or measuring for colour television signals

Landscapes

  • Engineering & Computer Science (AREA)
  • Health & Medical Sciences (AREA)
  • Biomedical Technology (AREA)
  • General Health & Medical Sciences (AREA)
  • Multimedia (AREA)
  • Signal Processing (AREA)
  • Investigating Or Analysing Materials By Optical Means (AREA)

Abstract

The invention discloses an intelligent household appliance control method for improving image quality, and an intelligent household appliance. The method comprises: the intelligent household appliance acquires a working environment image and detects abnormal interference items in the working environment image; image parameters are determined according to the abnormal interference items; and the image quality is optimized according to the image parameters. The method can flexibly select image parameters according to the working environment or application scene of the intelligent household appliance, improving the accuracy of image quality evaluation.

Description

Intelligent household appliance control method for improving image quality and intelligent household appliance
Technical Field
The invention relates to the field of intelligent household appliances, in particular to an intelligent household appliance control method for improving image quality and an intelligent household appliance.
Background
With the trend toward intelligent devices, image and video recognition technology has been widely applied in the household appliance field, and cameras are used in large numbers in appliances and in monitoring, so image quality evaluation is particularly important. Once a camera develops a problem, clear images cannot be acquired, related visual applications fail, and intelligent applications such as recognition and detection are affected. At present there is no general way in the household appliance field to quantify image quality, for example whether an image is sufficiently clear or sufficiently bright.
Disclosure of Invention
In a first aspect, the present application provides an intelligent home appliance control method for improving image quality, including:
acquiring, by the intelligent household appliance, a working environment image, and detecting an abnormal interference item in the working environment image;
determining image parameters according to the abnormal interference item;
and optimizing the image quality according to the image parameters.
In a second aspect, the present application provides an intelligent appliance, comprising:
the image acquisition module is used for acquiring a working environment image;
the image detection module is used for detecting abnormal interference items in the working environment image;
the parameter determining module is used for determining image parameters according to the abnormal interference items;
and the image optimization module is used for optimizing the image quality according to the image parameters.
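The four modules above form a simple pipeline from acquisition to optimization. The sketch below is a hypothetical structural illustration only; every class, method and parameter name is invented to mirror the claim wording and is not taken from the patent itself.

```python
# Hypothetical sketch of the claimed module pipeline; all names are invented
# to mirror the claim wording, not taken from the patent itself.
class SmartAppliance:
    def __init__(self, acquire, detect, select_params, optimize):
        self.acquire = acquire              # image acquisition module
        self.detect = detect                # image detection module
        self.select_params = select_params  # parameter determining module
        self.optimize = optimize            # image optimization module

    def run(self):
        image = self.acquire()                     # obtain working environment image
        interference = self.detect(image)          # detect abnormal interference items
        params = self.select_params(interference)  # determine image parameters
        return self.optimize(image, params)        # optimize image quality
```

Each callable stands in for one claimed module, so the control method of the first aspect is simply `run()`.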
Compared with the prior art, the intelligent household appliance control method for improving image quality and the intelligent household appliance provided by at least one embodiment of the invention have the following beneficial effects: in the working environment or application scene of the intelligent household appliance, the images acquired by the appliance are used to evaluate whether abnormal interference items are present during acquisition, and image parameters are selected according to those interference items to evaluate and optimize the image quality. The image parameters can thus be selected flexibly according to the working environment or application scene of the appliance, which improves the accuracy of image quality evaluation.
In addition, when image parameters are selected according to the abnormal interference item to evaluate the image quality, the quality of the acquired image can be optimized in combination with the quality of the image currently acquired by the appliance. This adds a preprocessing function to the intelligent household appliance and reduces how often the user must wipe or clean it.
In some embodiments of the present invention, determining the image parameters according to the abnormal interference item can also achieve the following effects:
1. A plurality of image parameters can be determined for one abnormal interference item, so that a single interference item can drive optimization across several parameters, improving both the efficiency of image quality optimization and the accuracy of image quality evaluation.
2. When the image acquisition module is blocked, blocking is judged using sharpness and brightness in combination: brightness evaluation is applied when the sharpness is normal, and when the image brightness does not meet the requirement the image acquisition module can be judged directly to be blocked. This avoids the case where blocking cannot be detected by sharpness calculation alone.
3. When oil and water vapor are attached to the image acquisition module, the color cast value can be selected for color evaluation to judge the degree of oil contamination of the module; when the color cast value does not meet the requirement, oil and water vapor are attached to the image acquisition module.
4. When dust is attached to the image acquisition module, the vignetting parameter can be selected for vignetting evaluation to judge the degree of dust contamination of the module; when the vignetting parameter does not meet the requirement, it can be judged directly that dust is attached, avoiding the situation where the vignetting formed by attached dust grows large enough to degrade image quality.
In some embodiments of the present invention, the abnormal interference item is determined according to the device type of the intelligent household appliance, which can achieve the following effects:
1. For oil smoke household appliances, three image parameters (sharpness, brightness and color cast value) can be used simultaneously to evaluate and optimize image quality, to detect whether the image acquisition module of the appliance is blocked and the degree of oil stain accumulated on it.
2. For stewing household appliances, three image parameters (sharpness, brightness and vignetting parameter) can be used simultaneously to evaluate and optimize image quality, to detect whether the image acquisition module of the appliance is blocked and the degree of dust accumulated on the module or the storage bin.
3. For cleaning household appliances, four image parameters (sharpness, brightness, color cast value and vignetting parameter) can be used simultaneously to evaluate and optimize image quality, to detect whether the image acquisition module of the appliance is blocked and the degree of oil stain and dust accumulated on it.
In some embodiments of the present invention, the following effects can also be achieved:
1. In sharpness evaluation, gradient statistics are computed with an improved edge detection operator, and the sum of edge intensity values over the whole image is divided by the total number of pixels in the image to evaluate the sharpness of the working environment image. This enlarges the gap between clear and blurred images and improves the accuracy of sharpness evaluation.
2. In brightness evaluation, the brightness of the whole image is calculated and compared with a brightness abnormality threshold LT to determine whether the image brightness is abnormal; brightness evaluation thus assists sharpness in judging whether the image acquisition module of the intelligent household appliance is blocked.
In addition, the number of pixels in each connected block is taken as a proportion of the total number of pixels in the whole image (the image area), and the brightness of the working environment image is evaluated a second time according to this proportion coefficient, which improves the accuracy of brightness evaluation.
3. In color evaluation, the image is converted into LAB space, a color cast value (Colorrate) is determined from the deviations of the a and b components, and the image quality is evaluated according to this value to determine the degree of color cast of the image, improving the accuracy of color evaluation.
4. In vignetting evaluation, the four corners and the central area of the working environment image are extracted, and the mean gray value of each corner area is compared with the gray value of the central area to judge whether that corner is a dark corner. The image quality is evaluated according to the dark-corner proportion, the usable area of the working environment image is determined, and the accuracy of vignetting evaluation is improved.
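As a minimal, hedged illustration of the gradient-ratio idea in point 1 above: the patent's improved edge detection operator is shown only in fig. 5 and is not reproduced in this excerpt, so a plain finite-difference gradient stands in for it here, and the edge threshold is an arbitrary assumption.

```python
import numpy as np

def sharpness_score(gray, edge_threshold=20.0):
    """Sum of significant edge intensities divided by the pixel count.

    The finite-difference gradient and `edge_threshold` are stand-ins for
    the patent's improved edge detection operator (fig. 5), which is not
    reproduced in this excerpt.
    """
    g = gray.astype(float)
    gx = np.abs(np.diff(g, axis=1))        # horizontal gradient magnitude
    gy = np.abs(np.diff(g, axis=0))        # vertical gradient magnitude
    edge = np.zeros_like(g)
    edge[:, :-1] += gx
    edge[:-1, :] += gy
    strong = edge[edge > edge_threshold]   # keep only significant edges
    return float(strong.sum()) / g.size    # ratio against whole-image pixel count
```

A clear image produces a larger ratio than a blurred copy of the same scene, which is the gap this style of metric is meant to widen.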
Additional features and advantages of the application will be set forth in the description which follows, and in part will be obvious from the description, or may be learned by the practice of the application. Other advantages of the present application may be realized and attained by the instrumentalities and combinations particularly pointed out in the specification and the drawings.
Drawings
The accompanying drawings are included to provide an understanding of the present disclosure and are incorporated in and constitute a part of this specification, illustrate embodiments of the disclosure and together with the examples serve to explain the principles of the disclosure and not to limit the disclosure.
Fig. 1 is a flowchart of an intelligent household appliance control method for improving image quality according to an embodiment of the present invention;
FIG. 2 is a schematic diagram of an image parameter structure according to an embodiment of the present invention;
fig. 3 is a flowchart for determining an abnormal interference item according to a device type of an intelligent home appliance according to an embodiment of the present invention;
FIG. 4 is a flow chart of determining sharpness of an image of a work environment according to an embodiment of the present invention;
FIG. 5 is a diagram illustrating an edge detection operator according to an embodiment of the present invention;
FIG. 6 is a flowchart illustrating a method for determining brightness of an image of a work environment according to an embodiment of the present invention;
FIG. 7 is a flowchart illustrating a second embodiment of determining brightness of a work environment image;
FIG. 8 is a flow chart of determining a color cast value for a work environment image according to an embodiment of the present invention;
FIG. 9 is a flowchart of determining vignetting parameters of a work environment image according to an embodiment of the present invention;
FIG. 10 is a schematic illustration of the four corners and image center of a work environment image provided by an embodiment of the present invention;
fig. 11 is a schematic structural diagram of an intelligent home appliance according to an embodiment of the present invention.
Detailed Description
The present application describes embodiments, but the description is illustrative rather than limiting and it will be apparent to those of ordinary skill in the art that many more embodiments and implementations are possible within the scope of the embodiments described herein. Although many possible combinations of features are shown in the drawings and discussed in the detailed description, many other combinations of the disclosed features are possible. Any feature or element of any embodiment may be used in combination with or instead of any other feature or element in any other embodiment, unless expressly limited otherwise.
The present application includes and contemplates combinations of features and elements known to those of ordinary skill in the art. The embodiments, features and elements disclosed in this application may also be combined with any conventional features or elements to form a unique inventive concept as defined by the claims. Any feature or element of any embodiment may also be combined with features or elements from other inventive aspects to form yet another unique inventive aspect, as defined by the claims. Thus, it should be understood that any of the features shown and/or discussed in this application may be implemented alone or in any suitable combination. Accordingly, the embodiments are not limited except as by the appended claims and their equivalents. Furthermore, various modifications and changes may be made within the scope of the appended claims.
Further, in describing representative embodiments, the specification may have presented the method and/or process as a particular sequence of steps. However, to the extent that the method or process does not rely on the particular order of steps set forth herein, the method or process should not be limited to the particular sequence of steps described. Other orders of steps are possible as will be understood by those of ordinary skill in the art. Therefore, the particular order of the steps set forth in the specification should not be construed as limitations on the claims. Further, the claims directed to the method and/or process should not be limited to the performance of their steps in the order written, and one skilled in the art can readily appreciate that the sequences may be varied and still remain within the spirit and scope of the embodiments of the present application.
An embodiment of the present invention provides an intelligent household appliance control method for improving image quality. Fig. 1 is a flowchart of the method; as shown in fig. 1, the method may be used to evaluate, from the images acquired by an intelligent household appliance, whether an abnormal interference item is present during acquisition, and specifically may include:
s101: the intelligent household appliance obtains a working environment image and detects an abnormal interference item in the working environment image.
In this embodiment, the intelligent household appliance may include an image acquisition module and a main control chip, wherein the image acquisition module may be a camera for acquiring a working environment image; the main control chip can comprise an image detection module, a parameter determination module and an image optimization module, wherein the image detection module is used for determining an abnormal interference item in a working environment image, the parameter determination module is used for determining an image parameter, and the image optimization module is used for optimizing the image quality according to the image parameter.
The working environment image is the image acquired by the image acquisition module of the intelligent household appliance, within its visual area, while the appliance works in its working environment or application scene. For example, while the oven is baking, the image acquisition module acquires an image of the oven interior; while the electric cooker is stewing, it acquires an image of the cooker's storage bin (used to temporarily hold food materials to be stewed) or inner pot; and while the sweeper is cleaning, it acquires an image of the surface or the cleaning area.
In this embodiment, when the image quality of the working environment image is evaluated, the image quality of the working environment image may be evaluated according to the device type of the intelligent home appliance, so as to detect whether an abnormal interference item occurs in the working environment image.
In this embodiment, the implementation principle by which the image acquisition module acquires the working environment image is the same as in the prior art, and this embodiment neither limits nor repeats it here.
Optionally, detecting an abnormal interference item in the working environment image may include: determining the abnormal interference item according to the device type of the intelligent household appliance. The abnormal interference item may include a blocking object located within the visual area of the image acquisition module, and contaminants attached to the lens surface of the image acquisition module.
In this embodiment, when evaluating the image quality of the working environment image, whether an abnormal interference item exists when the intelligent household appliance collects the image can be determined according to the device type of the intelligent household appliance.
The abnormal interference item may include a blocking object located within the visual area of the image acquisition module, that is, an object that blocks the module. The blocking object may be a specific device within the visual area, or contaminants on such a device. For example, while the oven is baking and the image acquisition module acquires an image of the food inside, the module may be blocked by a specific device (the grill) or by contaminants on that device (oil and water vapor on the grill). While the electric cooker is stewing and the module acquires an image of the food in the storage bin, the module may be blocked by the storage bin itself or by dust on the storage bin. While the sweeper is cleaning and the module acquires an image of a position in the cleaning area, the module may be blocked by an obstacle in the cleaning area or by oil, water vapor or dust on that obstacle.
The abnormal interference item may further include contaminants attached to the lens surface of the image acquisition module, such as oil, water vapor and dust. For example, while the oven is baking and the module acquires an image of the oven interior, oil and water vapor adhere to the module; while the electric cooker is stewing and the module acquires an image of the storage bin or inner pot, dust adheres to the module; and while the sweeper is cleaning and the module acquires an image of the surface or cleaning area, oil, water vapor and dust adhere to the module.
S102: and determining image parameters according to the abnormal interference item.
Specifically, fig. 2 is a schematic structural diagram of the image parameters provided in this embodiment. As shown in fig. 2, four image parameters may be set: sharpness, brightness, the color cast value used for color evaluation, and the vignetting parameter used for vignetting evaluation. One or more of the four are selected according to the abnormal interference items appearing in the working environment image, and an overall image quality score is calculated from them. In other words, the image parameters can be selected flexibly according to the working environment or application scene of the intelligent household appliance, so that image quality is evaluated and optimized with improved accuracy.
Optionally, determining the image parameters according to the abnormal interference item may include determining a plurality of image parameters from the abnormal interference item. In this embodiment, the four image parameters above are not all used in every evaluation and optimization; they may be combined as needed for a specific scene or abnormal interference item. Because several image parameters can be determined for one abnormal interference item, a single interference item can drive optimization across several parameters, improving both the efficiency of image quality optimization and the accuracy of image quality evaluation.
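The selection logic just described can be pictured as a small lookup table. The mapping below is a hypothetical sketch assembled from the combinations named in this description; the key names and parameter names are invented for illustration.

```python
# Hypothetical lookup from detected interference items to image parameters,
# following the combinations described in this embodiment; all names invented.
PARAMETERS_BY_INTERFERENCE = {
    "blocking_object": ("sharpness", "brightness"),
    "oil_water_vapor": ("color_cast_value",),
    "dust":            ("vignetting_parameter",),
}

def select_image_parameters(interference_items):
    """Union of the parameters for every detected interference item, in order."""
    selected = []
    for item in interference_items:
        for param in PARAMETERS_BY_INTERFERENCE.get(item, ()):
            if param not in selected:
                selected.append(param)
    return selected
```

An oil smoke appliance that detects both blocking and oil vapor would then evaluate sharpness, brightness and the color cast value together, matching the device-type effects listed earlier.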
S103: and optimizing the image quality according to the image parameters.
In this embodiment, when an abnormal interference item is detected in the working environment image, the image quality is optimized according to at least one image parameter.
Optionally, optimizing the image quality according to the image parameter may include the following implementation manners:
the first implementation mode comprises the following steps: and when the abnormal interference item comprises a shelter located in the visual area range of the image acquisition module, determining the definition and brightness of the image of the working environment, and optimizing the image quality according to the definition and brightness.
In this embodiment, when the image capture module is occluded, the sharpness calculation is usually used together with the brightness evaluation, and occlusion is determined by adopting the combination of sharpness and brightness. For example, if the image capturing module is blocked, if the sharpness calculation is performed alone, although an ideal sharpness value can be obtained, the blocking situation of the image capturing module cannot be determined. The definition and the brightness evaluation are combined for use, the brightness evaluation is used when the definition is normal, and when the image brightness does not meet the requirement, the image acquisition module can be directly judged to be shielded, so that the shielding condition that the image acquisition module cannot be judged only by performing the definition calculation alone is avoided.
As shown in fig. 2, the brightness evaluation may include parameters such as brightness, the number of overexposed points, and the overexposed area. The specific calculation of sharpness and brightness may use the methods in the following embodiments, or the prior art, which is not limited or described here.
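A minimal sketch of the combined judgement described above, under assumed thresholds: all numeric values here are invented, since the patent leaves the actual thresholds to the later embodiments.

```python
def is_blocked(sharpness, mean_brightness,
               sharpness_min=0.5, brightness_min=40.0, brightness_max=220.0):
    """Blocking judgement combining sharpness and brightness.

    All threshold values are hypothetical. Brightness is checked first:
    an out-of-range mean brightness flags blocking even when the sharpness
    value alone looks normal, which is the failure case described above.
    """
    if not (brightness_min <= mean_brightness <= brightness_max):
        return True                    # abnormal brightness: judged blocked
    return sharpness < sharpness_min   # otherwise fall back on sharpness
```

A flat occluder pressed against the lens gives a near-black or near-white frame, so the brightness branch fires even though the "sharpness" of that flat frame can look acceptable.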
The second implementation: when the abnormal interference item includes contaminants attached to the lens surface of the image acquisition module, the color cast value and vignetting parameter of the working environment image are determined, and the image quality is optimized according to them. The color cast value is used for color evaluation of the working environment image, and the vignetting parameter for its vignetting evaluation.
In this embodiment, when the abnormal interference item includes a contaminant attached to the lens surface of the image acquisition module, the contamination source of the contaminant may be determined, and at least one of the color cast value and the vignetting parameter selected as the image parameter according to that source. This covers the following cases:
case 1: and when the pollution source comprises oil stain water vapor, determining the color cast value of the working environment image, and optimizing the image quality according to the color cast value, wherein the color cast value is used for evaluating the color of the working environment image.
In this embodiment, when having attached to greasy dirt steam on the image acquisition module, can select the color cast value that is used for the colour aassessment to the greasy dirt degree of image acquisition module. When the color cast value does not meet the requirement, the oil stain water vapor attached to the image acquisition module can be directly judged.
Under normal conditions (for example, no contaminant on the image acquisition module) the color cast value generally need not be calculated; when the calculation of the other three image parameters (sharpness, brightness and the vignetting parameter) is clearly ambiguous, the color cast value can be used for verification.
The specific calculation of the color cast value may use the method in the following embodiments, or the prior art, which is not limited or described here.
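A minimal numpy sketch of the LAB-based idea, assuming an 8-bit LAB image in which a = b = 128 is the neutral point; the exact Colorrate formula belongs to the later embodiments, so the simple mean-offset magnitude below is an assumption.

```python
import numpy as np

def color_cast_value(lab):
    """Color-cast magnitude of an 8-bit LAB image shaped (H, W, 3).

    Assumes the common 8-bit encoding in which a = b = 128 is neutral.
    The mean offsets of the a and b components give a simple cast value;
    the patent's exact Colorrate formula is not given in this excerpt.
    """
    a = lab[..., 1].astype(float) - 128.0   # deviation of the a component
    b = lab[..., 2].astype(float) - 128.0   # deviation of the b component
    return float(np.hypot(a.mean(), b.mean()))
```

A neutral gray image yields 0; a uniform reddish cast (a pushed above 128) yields a value equal to the mean offset, and the quality check would compare this value against a threshold.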
Case 2: when the contamination source includes dust, the vignetting parameter of the working environment image is determined and the image quality is optimized according to it; the vignetting parameter is used for vignetting evaluation of the working environment image.
In practical use, dust accumulates on the image acquisition module over time. Because the dust concentrates mainly at the edge of the module's lens, partial dust coverage forms vignetting, while heavy dust blocks the lens completely; once the vignetting area grows too large, image quality suffers.
In this embodiment, when dust is attached to the image acquisition module, the vignetting parameter used for vignetting evaluation can be selected to judge the degree of dust contamination of the module. When the vignetting parameter does not meet the requirement, it can be judged directly that dust is attached, avoiding the situation where the vignetting formed by attached dust grows large enough to degrade image quality.
As shown in fig. 2, the vignetting parameter may include parameters such as the number of dark corners and the dark-corner proportion. The specific calculation of the vignetting parameter may use the method in the following embodiments, or the prior art, which is not limited or described here.
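A minimal sketch of the corner-versus-centre comparison described for fig. 9 and fig. 10; the patch size and darkness margin below are assumptions, since the exact values belong to the later embodiments.

```python
import numpy as np

def dark_corner_ratio(gray, corner_frac=0.2, dark_margin=30.0):
    """Fraction of the four corner patches judged to be dark corners.

    Each corner patch (corner_frac of each dimension) is compared with a
    same-sized central patch; a corner whose mean gray value falls
    `dark_margin` below the centre mean counts as a dark corner. Both
    constants are hypothetical stand-ins for the patent's thresholds.
    """
    h, w = gray.shape
    ch, cw = int(h * corner_frac), int(w * corner_frac)
    corners = [gray[:ch, :cw], gray[:ch, -cw:], gray[-ch:, :cw], gray[-ch:, -cw:]]
    centre = gray[h // 2 - ch // 2:h // 2 + ch // 2,
                  w // 2 - cw // 2:w // 2 + cw // 2]
    centre_mean = centre.astype(float).mean()
    dark = sum(1 for c in corners if c.astype(float).mean() < centre_mean - dark_margin)
    return dark / 4.0
```

The returned ratio plays the role of the dark-corner proportion: 0 means no corner is abnormally dark, 1.0 means all four are, and the usable image area shrinks accordingly.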
Case 3: when the contamination source includes both oil and water vapor and dust, the color cast value and vignetting parameter of the working environment image are determined and the image quality is optimized according to both; the color cast value is used for color evaluation of the working environment image, and the vignetting parameter for its vignetting evaluation.
In this embodiment, when oil, water vapor and dust are attached to the image acquisition module at the same time, two parameters, the vignetting parameter for vignetting evaluation and the color cast value for color evaluation, can be selected to optimize the image simultaneously. When the color cast value does not meet the requirement, it can be judged directly that oil and water vapor are attached to the module; when the vignetting parameter does not meet the requirement, it can be judged directly that dust is attached, avoiding the situation where the vignetting formed by attached dust grows large enough to degrade image quality.
According to the intelligent household appliance control method for improving image quality provided by this embodiment, in the working environment or application scene of the intelligent household appliance, the images acquired by the appliance are used to evaluate whether abnormal interference items are present during acquisition, and image parameters are selected according to those interference items to evaluate and optimize the image quality; the parameters can be selected flexibly according to the working environment or application scene, improving the accuracy of image quality evaluation. At the same time, when image parameters are selected according to the abnormal interference item to evaluate the image quality, the quality of the acquired image can be optimized in combination with the quality of the image currently acquired by the appliance, adding a preprocessing function to the appliance and reducing how often the user must wipe or clean it.
Further, fig. 3 is a flowchart for determining an abnormal interference item according to a device type of an intelligent home appliance according to an embodiment of the present invention, where on the basis of the foregoing embodiment, the determining an abnormal interference item according to a device type of an intelligent home appliance in this embodiment may include:
s301: and detecting the device type of the intelligent household appliance.
In this embodiment, the abnormal interference item may be determined according to the device type of the intelligent home appliance. The present embodiment mainly evaluates the imaging quality of the image acquisition module (the present embodiment takes a camera as an example) in an application scenario of three types of home appliances, namely, an oil smoke home appliance, a stewing home appliance and a cleaning home appliance, and the image quality evaluation principle of the other device type home appliances is the same as or similar to the image quality evaluation principle of the three types of home appliances, and is not limited and repeated in the present embodiment.
S302: when the intelligent household appliance belongs to oil smoke household appliances, the abnormal interference item is determined to comprise a shelter located in the range of the visual area of the image acquisition module and pollutants attached to the surface of the lens of the image acquisition module, and the pollution source of the pollutants comprises oil stain water vapor.
The household appliances that generate oil smoke during operation can be appliances with a baking function such as an oven or an air fryer, and the appliances that process oil smoke during operation can be appliances with an oil smoke absorption function such as a range hood or a hood-and-hob all-in-one machine. In practical application, the working scenario of oil smoke household appliances mainly involves the pollution source of oil stains and water vapor; therefore, the abnormal interference item of oil smoke appliances needs to consider blocking of the camera and oil stains and water vapor accumulated or attached on the camera.
Accordingly, determining the image parameters according to the abnormal interference term may include: when the intelligent household appliance belongs to oil smoke type household appliances, the image quality is optimized according to the definition, the brightness and the color cast value.
In this embodiment, for oil smoke household appliances, three image parameters (definition, brightness and the color cast value) can be used simultaneously to evaluate and optimize image quality, in order to detect whether the image acquisition module of the oil smoke appliance is blocked and to detect the degree of oil stains accumulated on the image acquisition module. Definition and brightness are combined to detect whether the camera is blocked, and the color cast value is used to detect the oil stain degree of the camera.
S303: when the intelligent household appliance belongs to stewing household appliances, the abnormal interference item is determined to comprise a shelter located in the visual area range of the image acquisition module and pollutants attached to the surface of the lens of the image acquisition module, and the pollution source of the pollutants comprises dust.
The stewing household appliance is a household appliance with a stewing function and can comprise an electric cooker, a pressure cooker, a wall breaking food processor and the like. In practical application, the working scene of the stewing household appliance mainly relates to a pollution source of dust, so that the abnormal interference item of the stewing household appliance needs to consider the shielding of the camera and the dust accumulated or attached on the camera.
Accordingly, determining the image parameters according to the abnormal interference term may include: when the intelligent household appliance belongs to stewing type household appliances, the image quality is optimized according to the parameters of definition, brightness and vignetting.
In this embodiment, for stewing household appliances, three image parameters (definition, brightness and the vignetting parameter) can be used simultaneously to evaluate and optimize image quality, in order to detect whether the image acquisition module of the stewing appliance is blocked and to detect the degree of dust accumulated on the image acquisition module or the storage bin. Definition and brightness are combined to detect whether the camera is blocked, and the vignetting parameter is used to detect the dust degree of the camera or the storage bin.
S304: when the intelligent household appliance belongs to a clean household appliance, the abnormal interference item is determined to comprise a shelter located in the visual area range of the image acquisition module and pollutants attached to the surface of the lens of the image acquisition module, and the pollution sources of the pollutants comprise oil, water vapor and dust.
The cleaning household appliance refers to a household appliance with a cleaning function, and can comprise a sweeping robot, a washing machine or a dishwasher. In practical application, the working scenario of cleaning household appliances involves the pollution sources of oil stains, water vapor and dust, so the abnormal interference item of cleaning appliances needs to consider blocking of the camera and oil stains, water vapor and dust accumulated or attached on the camera.
Accordingly, determining the image parameters according to the abnormal interference item may include: when the intelligent household appliance belongs to cleaning household appliances, the image quality is optimized according to the definition, the brightness, the color cast value and the vignetting parameter.
In this embodiment, for cleaning household appliances, four image parameters (definition, brightness, the color cast value and the vignetting parameter) can be used simultaneously to evaluate and optimize image quality, in order to detect whether the image acquisition module of the cleaning appliance is blocked and to detect the degree of oil stains and dust accumulated on the image acquisition module. Definition and brightness are combined to detect whether the camera is blocked, the color cast value is used to detect the oil stain degree of the camera, and the vignetting parameter is used to detect the dust degree of the camera.
The intelligent household appliance control method for improving image quality provided by the embodiment of the invention can set four image parameters: definition, brightness, the color cast value used for color evaluation, and the vignetting parameter used for vignetting evaluation, and determines the abnormal interference item according to the device type of the intelligent household appliance, so that the four image parameters can be combined and applied to different scenarios. For appliance types such as oil smoke, stewing and cleaning household appliances, this realizes detection of whether a shelter exists when images are collected, and of whether pollutants such as oil stains, water vapor and dust are attached.
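As a rough sketch of the parameter-selection logic in steps S302-S304, one might map device types to parameter sets as below (the names and structure are illustrative assumptions, not the patent's implementation):

```python
# Hypothetical mapping of device type to the image parameters evaluated,
# following S302-S304: oil-smoke appliances add the color cast value,
# stewing appliances add the vignetting parameter, cleaning appliances add both.
PARAMS_BY_DEVICE = {
    "oil_smoke": ["sharpness", "brightness", "color_cast"],
    "stewing":   ["sharpness", "brightness", "vignetting"],
    "cleaning":  ["sharpness", "brightness", "color_cast", "vignetting"],
}

def select_image_parameters(device_type: str) -> list:
    """Return the image-quality parameters to evaluate for a device type.

    Sharpness and brightness are always evaluated (occlusion detection);
    the fallback for unlisted device types is an assumption.
    """
    return PARAMS_BY_DEVICE.get(device_type, ["sharpness", "brightness"])
```

A caller would then run only the selected evaluations on each captured frame, which is how the text describes combining the four parameters per scenario.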
Further, fig. 4 is a flowchart for determining the definition of the working environment image according to the embodiment of the present invention, and as shown in fig. 4, determining the definition of the working environment image may include: and determining the definition of the working environment image by adopting a detection operator of n x n, wherein n is an integer greater than 1.
In practical application, image sharpness is usually computed with an edge detection operator such as Sobel or Canny. However, calculation shows that the score difference between a sharp image and a blurred image is small when a conventional edge detection operator is used: the edge mean of a sharp image is about 1.0 while that of a blurred image is 0.7-0.8, a difference of only 0.2-0.3, which gives little discrimination.
In order to avoid this poor discrimination when a conventional edge detection operator calculates definition, this embodiment improves the conventional edge detection operator and realizes definition detection through gradient statistics of the edge detection operator. Detecting definition with an n × n detection operator specifically includes the following steps:
s401: and carrying out gray processing on the working environment image.
In this embodiment, the working environment image acquired by the image acquisition module is subjected to gray scale processing to obtain the gray scale value of each pixel point in the working environment image. The implementation manner of performing the gray processing on the working environment image may adopt the prior art, and this embodiment is not limited and described herein.
S402: using a formula
Figure BDA0002356617310000121
And calculating the edge intensity value of each pixel point.
Wherein, a0 is the gray value of the pixel point located in the first row and the first column in n × n, and Ai is the gray value of the rest pixel points in n × n.
In this embodiment, after the image is subjected to gray processing, the entire image is traversed, and the edge intensity value Q (x, y) of each pixel point is calculated.
Specifically, this embodiment is described taking the edge detection operator with n = 2 as an example; the implementation principle for other values of n is the same as for n = 2 and is not detailed again. Fig. 5 is a schematic diagram of the edge detection operator according to the embodiment of the present invention. As shown in fig. 5, A0 is the gray value of the pixel in the first row and first column of the 2 × 2 operator, A1 is the gray value of the pixel in the first row and second column, A2 is the gray value of the pixel in the second row and first column, and A3 is the gray value of the pixel in the second row and second column. The whole image is traversed, and the formula

Q(x, y) = |A1 − A0| + |A2 − A0| + |A3 − A0|

is applied in turn to calculate the edge intensity value of the pixel located in the first row and first column of each 2 × 2 window, thereby determining the edge intensity value of each pixel in the whole image.
S403: using a formula
Figure BDA0002356617310000123
And accumulating and summing the edge intensity values of each pixel point.
Wherein, w is the width of the working environment image, h is the height of the working environment image, x is the coordinate of the pixel point in the horizontal direction, and y is the coordinate of the pixel point in the vertical direction.
In this embodiment, after the edge intensity value of each pixel point in the whole image is determined, the edge intensity values of each pixel point are accumulated and summed to obtain the total number of edge intensity values of the whole image
S404: and calculating the ratio RateQ of the edge intensity value to the total number of pixel points of the whole image by adopting a formula RateQ ═ SumQ/(w × h), wherein RateQ is used for expressing the definition of the working environment image.
The larger RateQ is, the higher the definition of the working environment image is, for example, a definition of RateQ 6.9 > a definition of RateQ 2.4 > a definition of RateQ 1.8.
In this embodiment, the total number of the edge intensity values of the whole image and the total number of the pixel points of the whole image (the area of the whole image) are calculated in proportion to evaluate the definition of the working environment image.
According to the intelligent household appliance control method for improving the image quality, the improved edge detection operator is adopted for carrying out gradient statistics, the total number of the edge intensity values of the whole image and the total number of the pixel points of the whole image are calculated in proportion, the definition of the working environment image is evaluated, the difference between the clear image and the blurred image can be enlarged, and the accuracy of the image quality definition evaluation is improved.
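Steps S401-S404 can be sketched in pure Python as follows. The handling of border pixels is an assumption (only positions where a full n × n window fits are evaluated), since the patent does not specify it; the divisor w × h follows the text.

```python
def sharpness_rateq(gray, n=2):
    """Sharpness score RateQ per S401-S404.

    `gray` is a grayscale image as a list of rows of integer gray values.
    For each pixel where a full n x n window fits, the edge intensity is
    Q(x, y) = sum over the window of |Ai - A0|, where A0 is the top-left
    value.  RateQ is the sum of all Q(x, y) divided by the image area w*h.
    """
    h, w = len(gray), len(gray[0])
    sum_q = 0
    for y in range(h - n + 1):          # assumption: skip partial border windows
        for x in range(w - n + 1):
            a0 = gray[y][x]
            q = 0
            for dy in range(n):
                for dx in range(n):
                    if dx or dy:        # all window pixels except A0 itself
                        q += abs(gray[y + dy][x + dx] - a0)
            sum_q += q
    return sum_q / (w * h)
```

A flat image scores 0 while an image with strong edges scores much higher, reproducing the widened gap between sharp and blurred images that the text claims for the improved operator.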
Further, the brightness of the acquired image can also be evaluated in this embodiment. Brightness evaluation handles abnormal situations: for example, when the lens is blocked by a relatively regular object, the definition index of the image remains close to normal, and the image quality is difficult to evaluate from definition alone. This embodiment therefore adds brightness evaluation, which mainly calculates the overall brightness of the current image. Fig. 6 is a flowchart of determining the brightness of the working environment image according to an embodiment of the present invention, and as shown in fig. 6, determining the brightness of the working environment image may include:
s601: and carrying out gray processing on the working environment image.
In this embodiment, the working environment image acquired by the image acquisition module is subjected to gray scale processing to obtain the gray scale value of each pixel point in the working environment image. The implementation manner of performing the gray processing on the working environment image may adopt the prior art, and this embodiment is not limited and described herein.
S602: using a formula
Figure BDA0002356617310000131
Calculating the overall brightness Lightrate of the work environment image, wherein the Lightrate is used for representing the brightness of the work environment image.
In this embodiment, the brightness Lightrate of the entire image may be determined according to the gray value mean and the occurrence times Hist (j) of the gray values of all the pixel points in the entire image.
Wherein the content of the first and second substances,
Figure BDA0002356617310000141
gray (x, y) represents the Gray value of the pixel point (x, y), w is the width of the working environment image, and h is the height of the working environment imageAnd the degree, x is the coordinate of the pixel point in the horizontal direction, and y is the coordinate of the pixel point in the vertical direction.
Here, Hist (j) represents the number of pixels having a gray value j (which may also be referred to as the number of occurrences of the gray value j) in the working environment image, where j is 0 and 1 … 255 is 255. For example, if j is 1, Hist (j) represents the number of pixels with a gray scale value of 1 in the working environment image, or the number of times gray scale value 1 appears in the working environment image.
In this embodiment, after the overall image brightness Lightrate is calculated, Lightrate may be compared with a brightness abnormality threshold LT to determine whether the image brightness is abnormal; a brightness abnormality may manifest, for example, as over-bright points, that is, pixels whose brightness reaches an abnormally high level. Specifically, when Lightrate < LT, the brightness is determined to be normal; when Lightrate ≥ LT, the brightness is determined to be abnormal. Here, the brightness abnormality threshold LT may be set to 0.9.
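A minimal sketch of the brightness computation, assuming Lightrate is the histogram-weighted mean gray value normalized to [0, 1] (the patent's exact formula image is not recoverable, but this is consistent with the threshold LT = 0.9):

```python
def lightrate(gray):
    """Overall brightness per S601-S602 (assumed normalization).

    Builds the gray histogram Hist(j), takes the histogram-weighted mean
    gray value, and normalizes it by 255 so the result lies in [0, 1].
    `gray` is a list of rows of integer gray values in 0..255.
    """
    h, w = len(gray), len(gray[0])
    hist = [0] * 256
    for row in gray:
        for g in row:
            hist[g] += 1
    mean = sum(j * hist[j] for j in range(256)) / (w * h)
    return mean / 255.0

LT = 0.9  # brightness abnormality threshold from the text
```

Comparing `lightrate(img)` against `LT` then reproduces the normal/abnormal decision described above.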
Optionally, in the brightness evaluation of this embodiment, on the basis of calculating the overall brightness of the current image, the number of over-bright points (overexposure points) appearing in the image and the area ratio of the corresponding bright regions may also be counted, and the image quality evaluated a second time. Specifically, when the overall brightness is greater than or equal to the brightness abnormality threshold LT, the method may further include:
carrying out binarization processing of a fixed threshold value on the image after the gray level processing to obtain a binarized image; performing connected region search on the binary image to obtain M blocks, wherein M is an integer greater than or equal to 1; counting the number of pixel points of each block, and carrying out proportional calculation on the number of the pixel points of each block and the total number of the pixel points of the whole image to obtain a proportional coefficient of each block relative to the whole image; and carrying out secondary evaluation on the brightness of the working environment image according to the scale factor.
In this embodiment, an overexposure point has the characteristic that its gray value in the original image is very high, basically in the range 250-255. The embodiment therefore performs fixed-threshold binarization on the gray image, with the fixed threshold selected from 250-255, to obtain a binarized image.
In this embodiment, a connected region search is performed on the binarized image to obtain a plurality of independent blocks, where an independent block refers to a pixel point where each block is not repeated. And for each independent block, counting the number of pixel points of each block.
Optionally, when a plurality of independent blocks are obtained, for each block, information such as coordinate values of each pixel point in the block is obtained, the minimum circumscribed rectangle of the block is determined according to the coordinate values of each pixel point, the number of pixel points of the minimum circumscribed rectangle of the block is counted, and the number of pixel points of the minimum circumscribed rectangle of the block is used as the number of pixel points of the block. In this embodiment, the number of the pixel points in the block is counted based on the minimum external rectangle of the block, so that the problem that the number of the pixel points cannot be counted when the block is irregular can be avoided.
In this embodiment, after the number of pixel points of each block is obtained, it is calculated in proportion to the total number of pixel points of the whole image (the area of the whole image) to obtain a proportionality coefficient of each block relative to the whole image, and the quality of the working environment image is evaluated a second time according to the coefficient. For example, if the scale factor of a single block exceeds 20%, there is a risk of affecting image quality, and an evaluation result of poor image quality can be obtained.
Optionally, after obtaining the number of the pixel points of each block, the embodiment may also use the number of the pixel points of each block as the number of the over-bright points of the block, directly determine whether the number of the effective over-exposure points obtained in all the blocks by statistics is within a certain number range, and if so, may ignore the over-bright points; otherwise, judging that the number of the over-bright points in the whole image reaches a certain number, obtaining an evaluation result with poor image quality, and simultaneously carrying out corresponding prompt.
Optionally, in this embodiment, the area of the over-bright point may also be determined according to the number of the over-bright points in the block, and the image quality may be evaluated according to the area of the over-bright point. The determination of the area of the over-bright point according to the number of the over-bright points in the block may adopt the prior art, for example, the area of the over-bright point in the block may be determined according to the minimum coordinate value and the maximum coordinate value corresponding to the over-bright point in the block, which is not described in detail in this embodiment.
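The fixed-threshold binarization and connected-region counting described above might be sketched as follows; the 4-connectivity and the stack-based search are assumptions, since the patent does not name a specific connected-component method:

```python
def overbright_blocks(gray, thresh=250):
    """Sketch of S704-S707: find over-bright blocks and their scale factors.

    Binarizes at a fixed threshold (250-255 per the text), searches
    4-connected regions of over-bright pixels, and returns each block's
    pixel count and its ratio to the whole image area for the secondary
    quality evaluation.
    """
    h, w = len(gray), len(gray[0])
    seen = [[False] * w for _ in range(h)]
    blocks = []
    for sy in range(h):
        for sx in range(w):
            if gray[sy][sx] >= thresh and not seen[sy][sx]:
                stack, count = [(sy, sx)], 0
                seen[sy][sx] = True
                while stack:                      # flood fill one block
                    y, x = stack.pop()
                    count += 1
                    for ny, nx in ((y - 1, x), (y + 1, x), (y, x - 1), (y, x + 1)):
                        if (0 <= ny < h and 0 <= nx < w
                                and not seen[ny][nx] and gray[ny][nx] >= thresh):
                            seen[ny][nx] = True
                            stack.append((ny, nx))
                blocks.append({"pixels": count, "ratio": count / (w * h)})
    return blocks
```

The caller can then flag the image when any block's `ratio` exceeds the 20% example threshold, or count blocks for the over-bright-point number check.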
Specifically, fig. 7 is a flowchart for determining the brightness of the working environment image according to the second embodiment of the present invention, as shown in fig. 7, which may specifically include the following steps:
s701: and acquiring a working environment image.
S702: and carrying out gray scale processing on the image.
S703: the Lightrate is calculated.
S704: and (5) fixing threshold value binarization.
S705: and acquiring the connected domain.
S706: and counting the number of the pixel points of the connected domain.
S707: and obtaining the over-bright point parameter.
The over-bright point parameter may include: the proportion coefficient of the number of the pixel points of each block to the total number of the pixel points of the whole image, the number of over-bright points or the area of the over-bright points and the like.
According to the intelligent household appliance control method for improving image quality, definition can be assisted by brightness evaluation to judge whether the image acquisition module of the intelligent household appliance is blocked, avoiding the case where blocking of the image acquisition module cannot be judged by definition calculation alone. Meanwhile, the number of pixel points of each connected block is calculated in proportion to the total number of pixel points of the whole image (the area of the whole image), and the quality of the working environment image is evaluated a second time according to the proportionality coefficient, which can improve the accuracy of image brightness evaluation.
Further, this embodiment may also perform color evaluation on the acquired image, that is, evaluate whether the image acquired by the color camera is close to the original color. Color evaluation handles abnormal situations such as the lens being polluted by oil stains and water vapor, and is used to judge the degree of lens oil contamination. Fig. 8 is a flowchart of determining the color cast value of the working environment image according to an embodiment of the present invention, and as shown in fig. 8, determining the color cast value of the working environment image may include:
s801: and performing space conversion on the working environment image, and converting the RGB image into a Lab image to obtain a component a representing red and green colors, a component b representing blue and yellow colors and a component L representing brightness.
In this embodiment, the RGB image collected by the camera is converted into the Lab color space. Of the three components of the Lab color space, L is luminance while a and b are two color channels: a runs from dark green through gray to bright pink, and b runs from bright blue through gray to yellow. That is, a represents the red-green axis and b represents the blue-yellow axis, and once the collected image is biased toward either the a component or the b component, an obvious deviation value appears. This embodiment mainly calculates this deviation to obtain a color cast value, which represents the degree of color cast of the image.
S802: and normalizing the components a and b to be 0-255.
In this embodiment, the pixel values of the pixels in the components a and b may be respectively subjected to gray processing, and normalized to 0-255. Optionally, the pixel values of the pixels in L may also be normalized to 0-255.
S803: and respectively calculating the mean value Meana and Meanb of the gray values of all the pixels in the two components a and b.
Wherein the content of the first and second substances,
Figure BDA0002356617310000171
a (x, y) represents the gray value of the pixel (x, y) in the component a of the image, b (x, y) represents the gray value of the pixel (x, y) in the component b of the image, w is the width of the working environment image, h is the height of the working environment image, x is the coordinate of the pixel in the horizontal direction, and y is the coordinate of the pixel in the vertical direction.
For example, if there are 4 pixels in the component a, w and h are 1, and the normalized gray-scale values of the pixels a (0,0), a (0,1), a (1,0) and a (1,1) are 50, 123, 200 and 220, respectively, then
Figure BDA0002356617310000172
S804: using a formula
Figure BDA0002356617310000173
The color value Colorrate of the work environment image is calculated, and is used for representing the color cast value of the work environment image.
Figure BDA0002356617310000174
λ is a preset color correction value. Hista (j) represents the number of pixels having a gray scale value j in the a component of the image, Histb (j) represents the number of pixels having a gray scale value j in the b component of the image, and j is 0,1 … 255.
Wherein, λ is a preset color correction value, and λ values are different under different working environments. For example, in a sealed working environment such as an oven household appliance and a stewing household appliance, the lambda can be 1.5; the lambda can be 1.2 in an open working environment such as a clean household appliance.
In this embodiment, after the color value Colorrate is calculated, Colorrate may be compared with the color cast determination threshold CT to determine the degree of image deviation. Specifically, when Colorrate ≤ CT, the color is determined to be normal; when Colorrate > CT, color cast is determined, and the more Colorrate exceeds CT, the more severe the color cast. Here, the color cast determination threshold CT may be set to 0.5.
In this embodiment, the color value Colorrate is an index for evaluating the degree of oil contamination when the camera captures an image: the more oil stains accumulate on the lens surface of the camera, the more they form a film with a certain color, causing color cast of varying degrees in the imaging. When the color value Colorrate is detected to deviate greatly from the normal value, for example when Colorrate > CT, it is determined that the image color deviation is caused by oil stains, and an alarm may be given.
The larger Colorrate is, the more color cast the working environment image has; the more it exceeds CT, the more severe the color cast and the more oil stains have accumulated on the camera. For example, when Colorrate = 0, the image shows its original colors.
According to the intelligent household appliance control method for improving image quality provided by the embodiment of the invention, the degree of oil stains accumulated on the image acquisition module of the intelligent household appliance is judged through color evaluation. Meanwhile, the image is converted into the Lab space, the color value Colorrate is determined according to the deviation of the a and b components, and the image quality is evaluated according to Colorrate to determine the degree of color cast, which can improve the accuracy of image color evaluation.
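Assuming Colorrate follows the common "equivalent circle" color-cast factor (a plausible reading of the lost formula, not confirmed by the patent text), steps S801-S804 might be sketched as below. The neutral point 128 assumes the a and b components were normalized to 0-255 as in S802.

```python
def color_cast(a_chan, b_chan, lam=1.5):
    """Assumed reconstruction of Colorrate = lam * D / M (S803-S804).

    D is the distance of the mean chromaticity (Meana, Meanb) from the
    neutral point 128; M is the mean chroma spread around that mean.
    `a_chan` and `b_chan` are the normalized a/b components as lists of
    rows of values in 0..255; `lam` is the preset correction value.
    """
    h, w = len(a_chan), len(a_chan[0])
    n = w * h
    mean_a = sum(map(sum, a_chan)) / n
    mean_b = sum(map(sum, b_chan)) / n
    d = ((mean_a - 128) ** 2 + (mean_b - 128) ** 2) ** 0.5
    ma = sum(abs(v - mean_a) for row in a_chan for v in row) / n
    mb = sum(abs(v - mean_b) for row in b_chan for v in row) / n
    m = (ma ** 2 + mb ** 2) ** 0.5
    if m == 0:
        # A perfectly uniform image: no spread, so any offset is pure cast.
        return 0.0 if d == 0 else float("inf")
    return lam * d / m
```

A neutral image yields 0 while a strongly red-shifted a component pushes the score far above the CT = 0.5 threshold, matching the decision rule in the text.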
Further, vignetting (dark corner) evaluation can be performed on the acquired image. Vignetting evaluation handles abnormal situations such as the lens being partially blocked by dirt, oil stains or dust, so that a blocking phenomenon appears at the four corners or edges of the final image. Fig. 9 is a flowchart of determining the vignetting parameter of the working environment image according to an embodiment of the present invention, and as shown in fig. 9, determining the vignetting parameter of the working environment image may include:
s901: and carrying out gray level processing on the working environment image, and respectively carrying out region extraction on four corners and the image center of the working environment image to obtain five preset regions.
In this embodiment, four corners of the image may be evaluated to determine the image quality thereof. Specifically, fig. 10 is a schematic diagram of four corners and an image center of a work environment image according to an embodiment of the present invention, and as shown in fig. 10, the present embodiment may extract four corners and an image center area of the work environment image.
In this embodiment, when performing region extraction, the method specifically includes: the working environment image is subjected to gray processing, and four corners and the image center are respectively extracted according to the radius dR, namely four quarter circles are respectively extracted from the four corners, and a whole circle is extracted from the center, so that four corners and an image center area shown in fig. 10 are obtained.
S902: the gray level Mean values of five preset regions are respectively calculated, wherein the gray level Mean values of four corner regions are Meani, i is 1,2,3 and 4, and the gray level Mean value of the central region of the image is Mean 5.
In this embodiment, the mean gray values Mean1-Mean5 of the five regions are calculated respectively, corresponding to the four corners and the central region. The mean gray value of a region may be calculated using the prior art, for example by averaging the gray values of all pixels in the region with the formula

Mean = (1/(w1 × h1)) Σ_{x=0}^{w1−1} Σ_{y=0}^{h1−1} Gray(x, y)

or, equivalently, from the gray histogram of the region:

Mean = (1/(w1 × h1)) Σ_{j=0}^{255} j × Hist(j)

Here, Gray(x, y) represents the gray value of the pixel (x, y) in each region, w1 is the image width of each region, h1 is the image height of each region, x is the coordinate of the pixel in the horizontal direction, and y is the coordinate of the pixel in the vertical direction.
S903: and calculating absolute values | Meani-Mean5| of the difference values between the gray level Mean values of the four angular regions and the gray level value of the image central region, and judging whether the working environment image has a dark angle or not according to | Meani-Mean5 |.
In this embodiment, the gray-scale mean of the four corner regions is compared with the gray-scale value of the central region of the image, so as to determine whether the four corner regions belong to dark corners. Specifically, for each four-corner area, when the value of Meani-Mean5 is less than MT, determining that the four-corner area does not belong to a dark corner; when the absolute value of Meani-Mean5 is more than or equal to MT, the four-corner area is determined to belong to the vignetting, and the vignetting of the working environment image is judged. Here, the grayscale threshold MT may be set to 10.
S904: and carrying out area growth on the dark corner area to form a connected domain, counting the number of the pixel points in the connected domain, and carrying out proportional calculation on the number of the pixel points in the connected domain and the total number of the pixel points of one fourth of the whole image to obtain the dark corner occupation ratio of the dark corner area, wherein the dark corner occupation ratio is used for expressing the dark corner parameters of the working environment image.
In this embodiment, region growing is performed on each region determined to be a dark corner, that is, pixel points with close gray values in the region are merged to finally form a connected domain. The number of pixel points in the connected domain is counted and divided by one quarter of the total number of pixel points of the whole image to obtain the proportion of the dark corner region (the dark corner ratio for short). When the dark corner ratio exceeds a threshold (for example, 10%), it is determined that the dark corner affects the image quality and processing is required.
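The dark corner evaluation of S901–S904 can be sketched as follows. This is an illustrative reading rather than the patented implementation: the region layout, the flood-fill style region growing (merging pixels within MT gray levels of the corner's seed pixel), and the helper names `region_means` and `vignetting_ratio` are all assumptions.

```python
from collections import deque

MT = 10  # gray-level threshold from the embodiment


def region_means(img, rw, rh):
    """Mean gray of the four corner regions and the central region.

    img is a row-major list of lists of gray values; rw/rh are the
    region width and height (w1, h1 in the text)."""
    h, w = len(img), len(img[0])

    def mean(x0, y0):
        total = sum(img[y][x] for y in range(y0, y0 + rh)
                              for x in range(x0, x0 + rw))
        return total / (rw * rh)

    corners = [mean(0, 0), mean(w - rw, 0), mean(0, h - rh), mean(w - rw, h - rh)]
    centre = mean((w - rw) // 2, (h - rh) // 2)
    return corners, centre


def vignetting_ratio(img, rw, rh):
    """Worst corner's connected-domain pixel count divided by one
    quarter of the whole image (the dark corner ratio of S904).

    Simplification: region growing starts at the corner's origin pixel
    and merges 4-connected pixels within MT gray levels of that seed."""
    h, w = len(img), len(img[0])
    corners, centre = region_means(img, rw, rh)
    origins = [(0, 0), (w - rw, 0), (0, h - rh), (w - rw, h - rh)]
    quarter = (w * h) / 4
    worst = 0.0
    for (x0, y0), m in zip(origins, corners):
        if abs(m - centre) < MT:
            continue  # S903: not a dark corner
        seed = img[y0][x0]
        seen = {(x0, y0)}
        queue = deque([(x0, y0)])
        while queue:  # region growing confined to the corner block
            x, y = queue.popleft()
            for nx, ny in ((x + 1, y), (x - 1, y), (x, y + 1), (x, y - 1)):
                if (x0 <= nx < x0 + rw and y0 <= ny < y0 + rh
                        and (nx, ny) not in seen
                        and abs(img[ny][nx] - seed) <= MT):
                    seen.add((nx, ny))
                    queue.append((nx, ny))
        worst = max(worst, len(seen) / quarter)
    return worst
```

On a 12×12 test frame with one darkened 4×4 corner, the worst ratio is 16/36 ≈ 0.44, well above the 10% level at which the embodiment flags the dark corner as affecting image quality.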
Optionally, in this embodiment, when the formed connected domain is irregular, the coordinate values of each pixel point in the connected domain are obtained, the minimum bounding rectangle of the connected domain is determined from these coordinates, the number of pixel points in the minimum bounding rectangle is counted, and this count is used as the number of pixel points of the connected domain.
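The minimum-bounding-rectangle substitution above reduces to a few lines; `bounding_rect_count` is a hypothetical helper sketching this step for a connected domain given as pixel coordinates:

```python
def bounding_rect_count(points):
    """Pixel count of the minimum axis-aligned bounding rectangle of a
    connected domain, used in place of the raw pixel count when the
    domain is irregular. `points` is an iterable of (x, y) coordinates."""
    xs = [x for x, _ in points]
    ys = [y for _, y in points]
    return (max(xs) - min(xs) + 1) * (max(ys) - min(ys) + 1)
```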
Optionally, the vignetting parameter may further include the number of dark corners; when a plurality of dark corner regions are determined, this embodiment may also count them and judge whether vignetting affects the image quality according to that number.
According to the intelligent household appliance control method for improving image quality, dark corner evaluation is used to judge whether dust has accumulated on the image acquisition module of the intelligent household appliance and whether the accumulated dust blocks it. The four corners and the central region of the working environment image are extracted, the gray mean of each corner region is compared with the gray mean of the central region to judge whether that corner is a dark corner, the image quality is evaluated according to the dark corner ratio, and the available area of the working environment image is determined, which improves the accuracy of the dark corner evaluation.
Fig. 11 is a schematic structural diagram of an intelligent household appliance provided in an embodiment of the present invention. As shown in fig. 11, the intelligent household appliance is configured to evaluate, from the images it acquires, whether an abnormal interference item is present during acquisition, and may specifically include: an image acquisition module 1101, an image detection module 1102, a parameter determination module 1103, and an image optimization module 1104.
The image acquisition module 1101 is used for acquiring a working environment image;
an image detection module 1102, configured to detect an abnormal interference item in the working environment image;
a parameter determining module 1103, configured to determine an image parameter according to the abnormal interference item;
and an image optimization module 1104 for optimizing image quality according to the image parameter.
The intelligent household appliance provided by the embodiment of the invention is used for executing the technical scheme of the method embodiment shown in fig. 1, the implementation principle and the implementation effect are similar, and details are not repeated here.
Further, in the above embodiment, the determining, by the parameter determining module 1103, the image parameter according to the abnormal interference item may include:
and determining a plurality of image parameters according to the abnormal interference item.
Further, in the above embodiment, the detecting an abnormal interference item in the working environment image by the image detecting module 1102 may include:
and determining an abnormal interference item according to the equipment type of the intelligent household appliance, wherein the abnormal interference item comprises a shelter located in the visual area range of the image acquisition module and pollutants attached to the surface of the lens of the image acquisition module.
Further, in the above embodiment, the image optimization module 1104 optimizes the image quality according to the image parameter, and may include:
when the abnormal interference item comprises a shelter located in the visual area range of the image acquisition module, determining the definition and brightness of the image of the working environment, and optimizing the image quality according to the definition and brightness;
and/or,
when the abnormal interference item comprises pollutants attached to the surface of the lens of the image acquisition module, determining a color cast value and a vignetting parameter of the working environment image, and optimizing the image quality according to the color cast value and the vignetting parameter.
Further, in the foregoing embodiment, the determining, by the image detection module 1102, the abnormal interference item according to the device type of the smart home appliance may include:
detecting the device type of the intelligent household appliance;
when the intelligent household appliance belongs to oil smoke type household appliances, determining that the abnormal interference item comprises a shelter located in the range of a visual area of the image acquisition module and pollutants attached to the surface of a lens of the image acquisition module, wherein a pollution source of the pollutants comprises oil stain water vapor;
when the intelligent household appliance belongs to stewing household appliances, determining that the abnormal interference item comprises a shelter located in the visual area range of the image acquisition module and pollutants attached to the surface of the lens of the image acquisition module, wherein the pollution source of the pollutants comprises dust;
when the intelligent household appliance belongs to a clean household appliance, determining that the abnormal interference item comprises a shelter located in the visual area range of the image acquisition module and pollutants attached to the surface of the lens of the image acquisition module, wherein pollution sources of the pollutants comprise oil stain water vapor and dust;
accordingly, the parameter determining module 1103 determines the image parameter according to the abnormal interference item, which may include:
when the intelligent household appliance belongs to oil smoke type household appliances, the image quality is optimized according to the definition, the brightness and the color cast value;
when the intelligent household appliance belongs to stewing type household appliances, optimizing the image quality according to the definition, the brightness and the vignetting parameters;
and when the intelligent household appliance belongs to a cleaning household appliance, optimizing the image quality according to the definition, the brightness, the color cast value and the vignetting parameter.
Further, in the above embodiment, the determining the sharpness of the work environment image by the image optimization module 1104 may include:
determining the definition of the working environment image by adopting a detection operator of n x n, wherein n is an integer greater than 1:
carrying out gray level processing on the working environment image;
using the formula E(x, y) = Σi |Ai − A0| (summed over the n·n − 1 pixel points Ai of the window other than A0) to calculate the edge intensity value E(x, y) of each pixel point;
using the formula Q = Σx Σy E(x, y) to accumulate and sum the edge intensity values E(x, y) of all pixel points;
calculating the ratio RateQ = Q/(w · h) of the accumulated edge intensity value Q to the total number of pixel points of the whole image, wherein RateQ is used for expressing the definition of the working environment image;
wherein A0 is the gray value of the pixel point located in the first row and the first column of the n×n window, Ai are the gray values of the remaining pixel points in the window, w is the width of the working environment image, h is the height of the working environment image, x is the coordinate of the pixel point in the horizontal direction, and y is the coordinate of the pixel point in the vertical direction.
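A minimal sketch of the sharpness measure described above. The edge-intensity formula itself appears only as an image in the source, so the concrete operator used here, the sum of absolute differences between the window anchor A0 and the remaining window pixels, is an assumption:

```python
def sharpness(img, n=3):
    """RateQ-style sharpness: accumulate an assumed edge intensity
    E(x, y) = sum_i |A_i - A_0| over every n*n window whose top-left
    pixel A_0 sits at (x, y), then divide by the total pixel count."""
    h, w = len(img), len(img[0])
    q = 0
    for y in range(h - n + 1):
        for x in range(w - n + 1):
            a0 = img[y][x]
            q += sum(abs(img[y + dy][x + dx] - a0)
                     for dy in range(n) for dx in range(n)
                     if (dy, dx) != (0, 0))
    return q / (w * h)
```

For a uniform image the score is 0; any gray-level edge raises it, so a blurred (low-contrast) frame scores lower than a focused one.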
Further, in the above embodiment, the determining the brightness of the work environment image by the image optimization module 1104 may include:
carrying out gray level processing on the working environment image;
calculating the overall brightness Lightrate of the working environment image by using a histogram-based formula [formula image not reproduced], wherein Lightrate is used for representing the brightness of the working environment image;
wherein Hist(j) represents the number of pixel points with gray value j in the working environment image, j = 0, 1, …, 255;
[formula image not reproduced]
Gray(x, y) represents the gray value of the pixel point (x, y), w is the width of the working environment image, h is the height of the working environment image, x is the coordinate of the pixel point in the horizontal direction, and y is the coordinate of the pixel point in the vertical direction.
Further, in the above embodiment, when the overall brightness Lightrate is greater than or equal to the brightness abnormality threshold LT, the image optimization module 1104 is further configured to:
carrying out binarization processing of a fixed threshold value on the image after the gray level processing to obtain a binarized image;
performing connected region search on the binary image to obtain M blocks, wherein M is an integer greater than or equal to 1;
counting the number of pixel points of each block, and carrying out proportional calculation on the number of the pixel points of each block and the total number of the pixel points of the whole image to obtain a proportional coefficient of each block relative to the whole image;
and performing secondary evaluation on the brightness of the working environment image according to the scale coefficient.
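The brightness evaluation and the secondary block evaluation above can be sketched as below. Because the Lightrate formula is given only as an image, the fraction-of-bright-pixels definition, the threshold values, and the function name `brightness_blocks` are assumptions made for illustration:

```python
from collections import deque


def brightness_blocks(img, lt=0.5, bin_thresh=200):
    """Overall brightness plus the secondary block evaluation.

    Assumed Lightrate: fraction of pixels at or above mid-gray (128).
    If Lightrate >= lt, binarise at a fixed threshold, find bright
    4-connected blocks, and return each block's scale coefficient
    (its pixel count over the whole image's pixel count)."""
    h, w = len(img), len(img[0])
    hist = [0] * 256
    for row in img:
        for g in row:
            hist[g] += 1
    lightrate = sum(hist[128:]) / (w * h)
    if lightrate < lt:
        return lightrate, []          # brightness normal, no second pass
    binary = [[1 if g >= bin_thresh else 0 for g in row] for row in img]
    seen = [[False] * w for _ in range(h)]
    coeffs = []
    for y in range(h):
        for x in range(w):
            if binary[y][x] and not seen[y][x]:
                seen[y][x] = True
                size, queue = 0, deque([(x, y)])
                while queue:          # connected-region search (BFS)
                    cx, cy = queue.popleft()
                    size += 1
                    for nx, ny in ((cx + 1, cy), (cx - 1, cy),
                                   (cx, cy + 1), (cx, cy - 1)):
                        if (0 <= nx < w and 0 <= ny < h
                                and binary[ny][nx] and not seen[ny][nx]):
                            seen[ny][nx] = True
                            queue.append((nx, ny))
                coeffs.append(size / (w * h))
    return lightrate, coeffs
```

A large scale coefficient marks a block-sized over-bright area (for example a reflection), while many small coefficients suggest scattered highlights.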
Further, in the above embodiments, the determining the color cast value of the work environment image by the image optimization module 1104 may include:
performing space conversion on the working environment image, and converting an RGB image into a Lab image to obtain a component a representing red and green colors, a component b representing blue and yellow colors and a component L representing brightness;
normalizing the components a, b and L to the range 0–255;
respectively calculating the mean values Meana and Meanb of the gray values of all the pixels in the two components a and b;
calculating the color value Colorrate of the working environment image by using a formula [formula image not reproduced], wherein Colorrate is used for representing the color cast value of the working environment image;
wherein,
[intermediate formula images not reproduced]
λ is a preset color correction value; Hista(j) represents the number of pixels with gray value j in the a component of the image, Histb(j) represents the number of pixels with gray value j in the b component of the image, j = 0, 1, …, 255; a(x, y) represents the gray value of the pixel point (x, y) in the a component of the image, b(x, y) represents the gray value of the pixel point (x, y) in the b component of the image, w is the width of the working environment image, h is the height of the working environment image, x is the coordinate of the pixel point in the horizontal direction, and y is the coordinate of the pixel point in the vertical direction.
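A minimal sketch of the color cast check, assuming the a and b chroma planes have already been converted from RGB and normalized to 0–255. The actual Colorrate formula is given only as an image, so measuring the distance of the mean chroma from the neutral point (128, 128) is a stand-in, gray-world style measure, not the patented formula:

```python
def color_cast(a_comp, b_comp):
    """Distance of the mean Lab chroma (Meana, Meanb) from the neutral
    point (128, 128) on 0-255-normalised a/b planes; larger values
    indicate a stronger colour cast (e.g. an oil-stain tint on the lens)."""
    n = sum(len(row) for row in a_comp)
    mean_a = sum(map(sum, a_comp)) / n   # Meana in the text
    mean_b = sum(map(sum, b_comp)) / n   # Meanb in the text
    return ((mean_a - 128) ** 2 + (mean_b - 128) ** 2) ** 0.5
```

For a neutral image the value is 0; a uniform shift of the a plane by +10 yields a value of 10.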
Further, in the above embodiment, the determining the vignetting parameter of the working environment image by the image optimization module 1104 may include:
performing gray processing on the working environment image, and performing region extraction on four corners and the center of the working environment image respectively to obtain five preset regions;
respectively calculating the gray level Mean values of five preset regions, wherein the gray level Mean values of four corner regions are Meani, i is 1,2,3 and 4, and the gray level Mean value of the central region of the image is Mean 5;
calculating the absolute values |Meani − Mean5| of the differences between the gray mean values of the four corner regions and the gray mean value of the image central region, and judging whether a dark corner exists in the working environment image according to |Meani − Mean5|;
and carrying out area growth on the dark corner area to form a connected domain, counting the number of pixel points in the connected domain, and carrying out proportional calculation on the number of the pixel points in the connected domain and the total number of the pixel points of one fourth of the whole image to obtain the dark corner ratio of the dark corner area, wherein the dark corner ratio is used for expressing the dark corner parameter of the working environment image.
It will be understood by those of ordinary skill in the art that all or some of the steps of the methods, systems, and functional modules/units in the devices disclosed above may be implemented as software, firmware, hardware, and suitable combinations thereof. In a hardware implementation, the division between functional modules/units mentioned in the above description does not necessarily correspond to the division of physical components; for example, one physical component may have multiple functions, or one function or step may be performed by several physical components in cooperation. Some or all of the components may be implemented as software executed by a processor, such as a digital signal processor or microprocessor, or as hardware, or as an integrated circuit, such as an application specific integrated circuit. Such software may be distributed on computer readable media, which may include computer storage media (or non-transitory media) and communication media (or transitory media). As is well known to those of ordinary skill in the art, the term computer storage media includes volatile and nonvolatile, removable and non-removable media implemented in any method or technology for storage of information such as computer readable instructions, data structures, program modules or other data. Computer storage media includes, but is not limited to, RAM, ROM, EEPROM, flash memory or other memory technology, CD-ROM, digital versatile disks (DVD) or other optical disk storage, magnetic cassettes, magnetic tape, magnetic disk storage or other magnetic storage devices, or any other medium which can be used to store the desired information and which can be accessed by a computer. In addition, communication media typically embodies computer readable instructions, data structures, program modules or other data in a modulated data signal such as a carrier wave or other transport mechanism, and includes any information delivery media as known to those skilled in the art.

Claims (10)

1. An intelligent household appliance control method for improving image quality is characterized by comprising the following steps:
the method comprises the steps that an intelligent household appliance obtains a working environment image and detects an abnormal interference item in the working environment image;
determining image parameters according to the abnormal interference item;
and optimizing the image quality according to the image parameters.
2. The method of claim 1, wherein determining image parameters from the anomalous interference term comprises:
and determining a plurality of image parameters according to the abnormal interference item.
3. The method of claim 2, wherein the detecting an anomalous interference term in the image of the work environment comprises:
and determining an abnormal interference item according to the equipment type of the intelligent household appliance, wherein the abnormal interference item comprises a shelter located in the visual area range of the image acquisition module and pollutants attached to the surface of the lens of the image acquisition module.
4. The method of claim 3, wherein said optimizing image quality based on said image parameters comprises:
when the abnormal interference item comprises a shelter located in the visual area range of the image acquisition module, determining the definition and brightness of the image of the working environment, and optimizing the image quality according to the definition and brightness;
and/or,
when the abnormal interference item comprises pollutants attached to the surface of the lens of the image acquisition module, determining a color cast value and a vignetting parameter of the working environment image, and optimizing the image quality according to the color cast value and the vignetting parameter.
5. The method according to claim 3 or 4, wherein the determining of the abnormal interference item according to the device type of the intelligent household appliance comprises:
detecting the device type of the intelligent household appliance;
when the intelligent household appliance belongs to oil smoke type household appliances, determining that the abnormal interference item comprises a shelter located in the range of a visual area of the image acquisition module and pollutants attached to the surface of a lens of the image acquisition module, wherein a pollution source of the pollutants comprises oil stain water vapor;
when the intelligent household appliance belongs to stewing household appliances, determining that the abnormal interference item comprises a shelter located in the visual area range of the image acquisition module and pollutants attached to the surface of the lens of the image acquisition module, wherein the pollution source of the pollutants comprises dust;
when the intelligent household appliance belongs to a clean household appliance, determining that the abnormal interference item comprises a shelter located in the visual area range of the image acquisition module and pollutants attached to the surface of the lens of the image acquisition module, wherein pollution sources of the pollutants comprise oil stain water vapor and dust;
correspondingly, the determining the image parameters according to the abnormal interference item comprises:
when the intelligent household appliance belongs to oil smoke type household appliances, the image quality is optimized according to the definition, the brightness and the color cast value;
when the intelligent household appliance belongs to stewing type household appliances, optimizing the image quality according to the definition, the brightness and the vignetting parameters;
and when the intelligent household appliance belongs to a cleaning household appliance, optimizing the image quality according to the definition, the brightness, the color cast value and the vignetting parameter.
6. The method of claim 4, wherein determining the sharpness of the work environment image comprises:
determining the definition of the working environment image by adopting a detection operator of n x n, wherein n is an integer greater than 1:
carrying out gray level processing on the working environment image;
using the formula E(x, y) = Σi |Ai − A0| (summed over the n·n − 1 pixel points Ai of the window other than A0) to calculate the edge intensity value E(x, y) of each pixel point;
using the formula Q = Σx Σy E(x, y) to accumulate and sum the edge intensity values E(x, y) of all pixel points;
calculating the ratio RateQ = Q/(w · h) of the accumulated edge intensity value Q to the total number of pixel points of the whole image, wherein RateQ is used for expressing the definition of the working environment image;
wherein A0 is the gray value of the pixel point located in the first row and the first column of the n×n window, Ai are the gray values of the remaining pixel points in the window, w is the width of the working environment image, h is the height of the working environment image, x is the coordinate of the pixel point in the horizontal direction, and y is the coordinate of the pixel point in the vertical direction.
7. The method of claim 4, wherein determining the brightness of the work environment image comprises:
carrying out gray level processing on the working environment image;
calculating the overall brightness Lightrate of the working environment image by using a histogram-based formula [formula image not reproduced], wherein Lightrate is used for representing the brightness of the working environment image;
wherein Hist(j) represents the number of pixel points with gray value j in the working environment image, j = 0, 1, …, 255;
[formula image not reproduced]
Gray(x, y) represents the gray value of the pixel point (x, y), w is the width of the working environment image, h is the height of the working environment image, x is the coordinate of the pixel point in the horizontal direction, and y is the coordinate of the pixel point in the vertical direction;
when the overall luminance Lightrate is greater than or equal to a luminance anomaly threshold LT, the method further comprises:
carrying out binarization processing of a fixed threshold value on the image after the gray level processing to obtain a binarized image;
performing connected region search on the binary image to obtain M blocks, wherein M is an integer greater than or equal to 1;
counting the number of pixel points of each block, and carrying out proportional calculation on the number of the pixel points of each block and the total number of the pixel points of the whole image to obtain a proportional coefficient of each block relative to the whole image;
and performing secondary evaluation on the brightness of the working environment image according to the scale coefficient.
8. The method of claim 4, wherein determining a color cast value for the work environment image comprises:
performing space conversion on the working environment image, and converting an RGB image into a Lab image to obtain a component a representing red and green colors, a component b representing blue and yellow colors and a component L representing brightness;
normalizing both the components a and b to be between 0 and 255;
respectively calculating the mean values Meana and Meanb of the gray values of all the pixels in the two components a and b;
calculating the color value Colorrate of the working environment image by using a formula [formula image not reproduced], wherein Colorrate is used for representing the color cast value of the working environment image;
wherein,
[intermediate formula images not reproduced]
λ is a preset color correction value; Hista(j) represents the number of pixels with gray value j in the a component of the image, Histb(j) represents the number of pixels with gray value j in the b component of the image, j = 0, 1, …, 255; a(x, y) represents the gray value of the pixel point (x, y) in the a component of the image, b(x, y) represents the gray value of the pixel point (x, y) in the b component of the image, w is the width of the working environment image, h is the height of the working environment image, x is the coordinate of the pixel point in the horizontal direction, and y is the coordinate of the pixel point in the vertical direction.
9. The method of claim 4, wherein determining vignetting parameters for the work environment image comprises:
performing gray processing on the working environment image, and performing region extraction on four corners and the center of the working environment image respectively to obtain five preset regions;
respectively calculating the gray level Mean values of five preset regions, wherein the gray level Mean values of four corner regions are Meani, i is 1,2,3 and 4, and the gray level Mean value of the central region of the image is Mean 5;
calculating the absolute values |Meani − Mean5| of the differences between the gray mean values of the four corner regions and the gray mean value of the image central region, and judging whether the working environment image has a dark corner according to |Meani − Mean5|;
and carrying out area growth on the dark corner area to form a connected domain, counting the number of pixel points in the connected domain, and carrying out proportional calculation on the number of the pixel points in the connected domain and the total number of the pixel points of one fourth of the whole image to obtain the dark corner ratio of the dark corner area, wherein the dark corner ratio is used for expressing the dark corner parameter of the working environment image.
10. An intelligent appliance, comprising:
the image acquisition module is used for acquiring a working environment image;
the image detection module is used for detecting abnormal interference items in the working environment image;
the parameter determining module is used for determining image parameters according to the abnormal interference item;
and the image optimization module is used for optimizing the image quality according to the image parameters.
CN202010009546.9A 2020-01-06 2020-01-06 Intelligent household appliance control method for improving image quality and intelligent household appliance Active CN111182294B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202010009546.9A CN111182294B (en) 2020-01-06 2020-01-06 Intelligent household appliance control method for improving image quality and intelligent household appliance

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202010009546.9A CN111182294B (en) 2020-01-06 2020-01-06 Intelligent household appliance control method for improving image quality and intelligent household appliance

Publications (2)

Publication Number Publication Date
CN111182294A true CN111182294A (en) 2020-05-19
CN111182294B CN111182294B (en) 2021-11-30

Family

ID=70650778

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202010009546.9A Active CN111182294B (en) 2020-01-06 2020-01-06 Intelligent household appliance control method for improving image quality and intelligent household appliance

Country Status (1)

Country Link
CN (1) CN111182294B (en)

Cited By (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN112508898A (en) * 2020-11-30 2021-03-16 北京百度网讯科技有限公司 Method and device for detecting fundus image and electronic equipment
CN113160156A (en) * 2021-04-12 2021-07-23 佛山市顺德区美的洗涤电器制造有限公司 Method for processing image, processor, household appliance and storage medium
CN113229767A (en) * 2021-04-12 2021-08-10 佛山市顺德区美的洗涤电器制造有限公司 Method for processing image, processor, control device and household appliance

Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN107560301A (en) * 2017-08-28 2018-01-09 合肥美的智能科技有限公司 Control method, control device and the refrigerator of refrigerator
CN109951635A (en) * 2019-03-18 2019-06-28 Oppo广东移动通信有限公司 Photographing processing method and device, mobile terminal and storage medium
CN209404514U (en) * 2018-10-16 2019-09-20 珠海格力电器股份有限公司 Cooking apparatus and cooking apparatus image collecting device
CN110889801A (en) * 2018-08-16 2020-03-17 九阳股份有限公司 Decontamination optimization method for camera of smoke stove system and smoke stove system

Patent Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN107560301A (en) * 2017-08-28 2018-01-09 合肥美的智能科技有限公司 Control method, control device and the refrigerator of refrigerator
CN110889801A (en) * 2018-08-16 2020-03-17 九阳股份有限公司 Decontamination optimization method for camera of smoke stove system and smoke stove system
CN209404514U (en) * 2018-10-16 2019-09-20 珠海格力电器股份有限公司 Cooking apparatus and cooking apparatus image collecting device
CN109951635A (en) * 2019-03-18 2019-06-28 Oppo广东移动通信有限公司 Photographing processing method and device, mobile terminal and storage medium

Cited By (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN112508898A (en) * 2020-11-30 2021-03-16 北京百度网讯科技有限公司 Method and device for detecting fundus image and electronic equipment
CN113160156A (en) * 2021-04-12 2021-07-23 佛山市顺德区美的洗涤电器制造有限公司 Method for processing image, processor, household appliance and storage medium
CN113229767A (en) * 2021-04-12 2021-08-10 佛山市顺德区美的洗涤电器制造有限公司 Method for processing image, processor, control device and household appliance
CN113229767B (en) * 2021-04-12 2022-08-19 佛山市顺德区美的洗涤电器制造有限公司 Method for processing image, processor, control device and household appliance

Also Published As

Publication number Publication date
CN111182294B (en) 2021-11-30

Similar Documents

Publication Publication Date Title
CN111182294B (en) Intelligent household appliance control method for improving image quality and intelligent household appliance
JP6179827B2 (en) How to generate a focus signal
CN109257582B (en) Correction method and device for projection equipment
US8630504B2 (en) Auto-focus image system
EP2107521B1 (en) Detecting a border region in an image
CN105338342A (en) Image dead pixel detection method and device
CN109598723B (en) Image noise detection method and device
CN105451015A (en) Detection method and device for image dead pixels
CN103262524B (en) Automatic focusedimage system
JP4197768B2 (en) Information reading system
CN103650473B (en) Automatic focusedimage method
JP2019128197A (en) Attached matter detector and attached matter detection method
JP6057086B2 (en) Method for determining image sharpness and image pickup device
EP2649787A1 (en) Auto-focus image system
JP2014504375A5 (en)
CN111985273B (en) Image processing method of intelligent household appliance and intelligent household appliance
JP2021052238A (en) Deposit detection device and deposit detection method
Liao et al. Dirt detection on camera module using stripe-wise background modeling
CN114881895A (en) Infrared image stripe noise processing method based on interframe difference
Bardos et al. Measuring noise in colour images
JPS61206388A (en) Picture pattern matching method

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant