CN110872861A - Intelligent water spraying rod, intelligent toilet lid and flushing system integrated with visual function - Google Patents


Info

Publication number
CN110872861A
Authority
CN
China
Prior art keywords
image
water spray
optical lens
intelligent
image data
Prior art date
Legal status
Pending
Application number
CN201811012524.7A
Other languages
Chinese (zh)
Inventor
李俊录
高强
陈熙宇
庄起龙
Current Assignee
Magic Water Technology (beijing) Co Ltd
Original Assignee
Magic Water Technology (beijing) Co Ltd
Priority date
Filing date
Publication date
Application filed by Magic Water Technology (beijing) Co Ltd
Priority to CN201811012524.7A
Publication of CN110872861A


Classifications

    • E — FIXED CONSTRUCTIONS
    • E03 — WATER SUPPLY; SEWERAGE
    • E03D — WATER-CLOSETS OR URINALS WITH FLUSHING DEVICES; FLUSHING VALVES THEREFOR
    • E03D 9/00 — Sanitary or other accessories for lavatories; devices for cleaning or disinfecting the toilet room or the toilet bowl; devices for eliminating smells
    • E03D 9/08 — Devices in the bowl producing upwardly-directed sprays; modifications of the bowl for use with such devices; bidets; combinations of bowls with urinals or bidets; hot-air or other devices mounted in or on the bowl, urinal or bidet for cleaning or disinfecting

Abstract

The invention provides an intelligent water spray rod, an intelligent toilet lid and a flushing system with a visual function. An image acquisition module is arranged on the intelligent water spray rod, and the image acquisition end face of the module and the spray head of the water spray rod are located on the same side of the body. When visualization is needed, the image acquisition module captures an image of the part to be observed, converts it into image data and forwards the data to a terminal device, which displays the image for the user to view. The user can therefore see the flushing effect on the part to be observed in real time through the terminal device, conveniently inspect the private parts and their surroundings, and observe conditions such as hemorrhoids or skin wounds, solving the problems that the cleaning effect cannot be confirmed and the private parts cannot be examined.

Description

Intelligent water spraying rod, intelligent toilet lid and flushing system integrated with visual function
Technical Field
The invention relates to the technical field of sanitary ware, in particular to an intelligent water spray rod, an intelligent toilet lid and a flushing system with a visual function.
Background
The most basic function of an intelligent toilet lid is flushing. Common flushing functions include rear cleaning and front cleaning, realized by arranging a water spray rod and controlling it to spray water onto the part to be cleaned.
Although users pay attention to the flushing function and expect the intelligent toilet lid to "flush clean", they cannot tell whether the part to be observed has actually been cleaned after being flushed. Users in special situations (such as hemorrhoid patients, users with skin itching caused by prolonged sitting, users with hip skin discomfort caused by mosquito bites, or patients with pain or injury of the private parts) may be even more concerned about the flushing effect. However, existing intelligent toilet lids do not allow the user to confirm the flushing effect on the part to be observed.
Accordingly, there is a need for an improvement of the existing intelligent toilet lid to solve the above problems.
Disclosure of Invention
The invention aims to solve the technical problem that a user cannot confirm the flushing effect of an intelligent toilet lid in the prior art, and further provides an intelligent water spraying rod, an intelligent toilet lid and a flushing system with a visual function.
Therefore, the invention provides an intelligent water spray rod, which comprises an image acquisition module, wherein:
the image acquisition module is arranged on the body of the water spray rod, and the image acquisition end surface of the image acquisition module and the spray head of the water spray rod are positioned on the same side of the body;
and the image acquisition module acquires the image of the part to be observed after responding to the visualization request signal and outputs image data obtained based on the image to the terminal equipment.
Optionally, the intelligent water spray rod further comprises a detection module:
the detection module is used for detecting whether the spray head of the water spray rod is spraying water onto the part to be cleaned; when the detection module detects that the spray head has switched from the spraying state to the stopped state, it outputs a visualization request signal to the image acquisition module.
Optionally, the intelligent water spray rod further comprises a light-emitting element and a control switch:
the light-emitting element is arranged on the body, and the power supply end of the light-emitting element is connected to a power supply through the control switch;
and the control switch is turned on in response to the visualization request signal, so that the light-emitting element emits light after being powered.
Optionally, in the intelligent water spray rod, the image acquisition module includes a driving mechanism, an optical lens, an image detector, and an image processor:
the optical lens is arranged on the driving output end of the driving mechanism and used for acquiring and outputting a target image of a part to be observed;
the image detector acquires the target image and outputs an image signal obtained based on the target image;
the image processor is used for acquiring the image signal and determining whether the focal length of the optical lens needs to be adjusted or not according to the image signal and a preset automatic focusing model; outputting image data obtained based on the target image if the focal length of the optical lens does not need to be adjusted; and if the focal length of the optical lens needs to be adjusted, controlling the driving mechanism to drive the optical lens to move according to a set step length.
Optionally, in the intelligent water spray rod as described above, the preset auto-focusing model in the image processor is used for:
carrying out frequency domain transformation processing on the image signal to obtain frequency domain image data;
acquiring a high-frequency component and a direct-current component in the frequency domain image data; acquiring a relative high-frequency component of the image signal according to the high-frequency component and the direct-current component;
if the relative high-frequency component is the maximum value, the focal length of the optical lens does not need to be adjusted.
Optionally, in the intelligent water spray rod, the image detector is further configured to process the target image, and obtain the image data according to a gray value of each row of pixel points in a horizontal direction of the target image.
Optionally, in the intelligent water spray rod, the image detector is further configured to process the target image, determine a boundary line of a sudden change in gray level of a pixel point in the target image, and use an area surrounded by the boundary line as a key area;
and acquiring the gray value of each pixel point of the minimum operation area in the key area as the image data.
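For illustration, the key-area step described above — locating the boundary lines where the gray level changes abruptly and taking the enclosed region — can be sketched roughly as follows. This is an assumed reconstruction, not the patented implementation: the `jump` threshold is invented, and a bounding box stands in for "the area surrounded by the boundary line".

```python
import numpy as np

def key_area_bbox(img, jump=50):
    """Finds the smallest box enclosing pixels where the gray level
    jumps abruptly (illustrative sketch; `jump` is an assumed threshold).

    Returns (top, bottom, left, right) indices, or None if no abrupt
    gray-level change is found in the image.
    """
    img = np.asarray(img, dtype=float)
    # Absolute gray-level differences between neighboring pixels,
    # vertically (gx) and horizontally (gy).
    gx = np.abs(np.diff(img, axis=0, prepend=img[:1]))
    gy = np.abs(np.diff(img, axis=1, prepend=img[:, :1]))
    edges = np.maximum(gx, gy) > jump
    if not edges.any():
        return None
    rows = np.where(edges.any(axis=1))[0]
    cols = np.where(edges.any(axis=0))[0]
    return rows[0], rows[-1], cols[0], cols[-1]
```

The gray values inside the returned box would then be passed on as the image data, as the text describes.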
Optionally, in the intelligent water spray rod, the image detector is further configured to process the target image, and acquire a target region in the target image by using a convolutional neural network algorithm;
and acquiring the gray value of each pixel point of the minimum operation area in the target region as the image data.
Optionally, in the above intelligent water spray rod, the image processor acquires the image signal and determines, from the image signal in combination with a preset auto-focusing model, whether the focal length of the optical lens needs to be adjusted, including:
in the initial stage, after the image signal is acquired for the first time and the relative high-frequency component is obtained, the focal length of the optical lens is adjusted according to a preset step length and a preset direction;
a trend judgment stage, namely acquiring the image signal and the relative high-frequency component after the focal length adjustment is finished each time, and searching a peak point of the relative high-frequency component according to the variation trend of the relative high-frequency component;
a reverse adjustment stage: changing the focusing direction and reducing the preset step length, and then returning to the trend judgment stage;
and repeating the trend judgment stage and the reverse adjustment stage until the preset step length is reduced to its minimum value; when a peak point of the relative high-frequency component appears again, adjustment of the focal length of the optical lens is stopped.
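The staged search above amounts to a hill-climbing search whose step is halved at each direction reversal. A minimal sketch follows; the sharpness callback and the numeric step values are assumptions, not values from the text.

```python
def focus_search(sharpness_at, start=0.0, step=1.0, min_step=0.125):
    """Successive-approximation focus search (illustrative sketch).

    sharpness_at(pos) stands in for capturing an image with the lens at
    `pos` and computing its relative high-frequency component; the step
    sizes are assumed values.
    """
    pos = start
    direction = 1                       # preset direction (initial stage)
    best = sharpness_at(pos)
    while step >= min_step:
        candidate = pos + direction * step
        value = sharpness_at(candidate)
        if value > best:                # trend still rising: keep moving
            pos, best = candidate, value
        else:                           # passed the peak: reverse, refine
            direction = -direction
            step /= 2.0
    return pos
```

With a single-peaked sharpness curve the search converges to the peak within the minimum step length, mirroring the repeated trend-judgment and reverse-adjustment stages.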
Optionally, in the intelligent water spray rod as described above, the image processor is further configured to:
restore a target image from the image data, compare it with a sample image, and determine from the comparison result whether the part to be observed is normal;
if the part to be observed is abnormal, output a prompt signal to remind the user.
Optionally, the intelligent water spray rod further comprises a sterilization module:
the sterilization module is arranged inside the spray head of the water spray rod; it is used for releasing a high-frequency electric signal into the water in the spray head so as to electrolyze some of the water molecules into positive and negative ions.
Optionally, in the intelligent water spray rod, the high-frequency electric signal released by the sterilization module gives the water in the spray head a voltage of about 1–3 V.
The invention also provides an intelligent toilet lid, which is provided with the intelligent water spray rod.
The invention also provides a flushing system with a visual function, which comprises the intelligent water spray rod described above and a terminal device provided with a display module: the terminal device receives the image data sent by the intelligent water spray rod, restores it into the image of the corresponding part to be observed, and displays the image through the display module for the user to observe.
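The terminal-side restore step can be illustrated as follows. The text does not specify a wire format, so the sketch assumes a hypothetical one: a 4-byte big-endian width/height header followed by row-major 8-bit gray values.

```python
import struct

def restore_image(payload):
    """Restores received image data into a 2-D pixel array (sketch).

    Assumes a hypothetical wire format — two big-endian 16-bit fields
    (width, height) followed by width*height 8-bit gray values; the
    real format used by the spray rod is not given in the text.
    """
    w, h = struct.unpack(">HH", payload[:4])
    pixels = list(payload[4:4 + w * h])
    # Split the flat byte sequence into h rows of w pixels each.
    return [pixels[r * w:(r + 1) * w] for r in range(h)]
```

The resulting rows of gray values could then be handed to whatever display module the terminal provides.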
Compared with the prior art, any technical scheme provided by the invention at least has the following beneficial effects:
according to the intelligent water spray rod, the intelligent toilet lid and the flushing system with the integrated visualization function, the image acquisition module is arranged on the water spray rod, and the image acquisition end face of the image acquisition module and the spray head of the water spray rod are located on the same side of the body. The image acquisition module can acquire the image of the part to be observed when visualization is needed, converts the acquired image into image data and forwards the image data to the terminal equipment, and the terminal equipment displays the image of the part to be observed for use. Therefore, the user can see the flushing effect of the part to be observed in real time through the terminal equipment, the private part and the surrounding conditions of the private part can be conveniently seen, the skin wound condition of the haemorrhoids or the private part and the like can be observed, the problems that the cleaning effect cannot be confirmed and the private part cannot be seen in the prior art are solved, and convenience is brought to the user.
Drawings
Fig. 1 is a schematic structural diagram of an intelligent water spray rod according to an embodiment of the present invention;
fig. 2 is a schematic structural diagram of an intelligent water spray rod according to another embodiment of the present invention;
FIG. 3 is a schematic structural diagram of an image acquisition module according to an embodiment of the present invention;
FIG. 4 is a flowchart illustrating a method for automatically focusing an optical lens according to an embodiment of the present invention;
FIG. 5 is a network architecture diagram of a convolutional neural network, in accordance with one embodiment of the present invention;
FIG. 6 is a diagram of a portion of a training sample of a convolutional neural network, in accordance with one embodiment of the present invention;
FIG. 7 is a partial test sample diagram of a convolutional neural network, in accordance with one embodiment of the present invention;
FIG. 8 is a graph illustrating a relationship between a lens position and an image sharpness during a focusing process of a successive approximation algorithm according to an embodiment of the present invention;
FIG. 9 is a flowchart of the successive approximation algorithm focusing according to an embodiment of the present invention;
FIG. 10 is a functional block diagram of a flushing system incorporating visualization capabilities according to an embodiment of the present invention.
Detailed Description
The embodiments of the present invention will be further described with reference to the accompanying drawings. In the description of the present invention, it should be noted that the terms "center", "upper", "lower", "left", "right", "vertical", "horizontal", "inner", "outer", etc., indicate orientations or positional relationships based on the orientations or positional relationships shown in the drawings, and are only for convenience of description of the present invention, and do not indicate or imply that the device or assembly referred to must have a specific orientation, be constructed and operated in a specific orientation, and thus, should not be construed as limiting the present invention. Furthermore, the terms "first," "second," and "third" are used for descriptive purposes only and are not to be construed as indicating or implying relative importance. Wherein the terms "first position" and "second position" are two different positions.
In the description of the present invention, it should be noted that, unless otherwise explicitly specified or limited, the terms "mounted," "connected," and "connected" are to be construed broadly, e.g., as meaning either a fixed connection, a removable connection, or an integral connection; can be mechanically or electrically connected; the two components can be directly connected or indirectly connected through an intermediate medium, and the two components can be communicated with each other. The specific meanings of the above terms in the present invention can be understood in specific cases to those skilled in the art.
Example 1
The embodiment provides an intelligent water spray rod 100, as shown in fig. 1, which includes an image acquisition module 101. The image acquisition module 101 is disposed on a body 102 of the water spray rod, a nozzle 103 is disposed on the body 102, and the image acquisition module 101 and the nozzle 103 are disposed on the same side of the body 102. The image acquisition module 101 acquires an image of the part to be observed in response to a visualization request signal and outputs image data obtained from the image to a terminal device. The terminal device may be a handheld mobile terminal such as a mobile phone or tablet, which restores the received image data into an image of the cleaned part and displays it on its screen. Alternatively, the terminal device may be a screen or projection surface suitable for displaying images, arranged in front of the intelligent toilet lid; when the user is not carrying a mobile terminal, the image acquisition module 101 sends the image data directly to this terminal device for display. Both the image acquisition module 101 and the terminal device can automatically delete stored image data in a timely manner, protecting the user's personal privacy.
The image acquisition module 101 can be implemented with an existing high-definition camera, which has a large field of view and can capture a complete image of the user's buttock area while the user sits on the intelligent toilet lid, thereby relaxing the requirements on its mounting angle.
In the above scheme, the visualization request signal can be generated in different ways. For example, an activation switch may be provided on the intelligent water spray rod, with its control end arranged on the intelligent toilet lid at a position convenient for the user to operate. Whether or not the flushing function of the intelligent water spray rod is being used, the user can send a visualization request signal to the image acquisition module 101 by operating the activation switch. When the flushing function is used, the flushing effect can be checked from the image; when it is not, conditions such as hemorrhoids or wounds in the private area can be examined from the image.
Preferably, when the flushing function is used, an image of the part to be observed can be photographed automatically after flushing is completed. Specifically, a detection module can be arranged inside the nozzle 103, on its surface, or in the water flow path. The detection module detects whether the spray head of the water spray rod is spraying water onto the part to be cleaned, and outputs a visualization request signal to the image acquisition module when it detects that the spray head has switched from spraying to not spraying. The detection module measures, for example, water flow rate or pressure; since the flow rate or pressure while the spray head is spraying necessarily differs from that when it is not, the module can very conveniently detect the moment the spray head switches from spraying to stopped. That transition indicates that flushing is complete, which is exactly the right moment to send a visualization request signal so that the cleaning effect can be visualized.
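The spray-to-stop transition detection can be sketched as a simple edge detector over flow-rate samples. The threshold value and the sampling interface below are assumptions; the real values depend on the sensor hardware.

```python
class SprayStopDetector:
    """Detects the spray -> stop transition from flow samples (sketch).

    FLOW_THRESHOLD is an assumed flow-rate value separating "spraying"
    from "not spraying"; a pressure sensor could be used the same way.
    """
    FLOW_THRESHOLD = 0.2  # assumed value, e.g. in L/min

    def __init__(self):
        self.was_spraying = False

    def update(self, flow_rate):
        """Feed one sensor sample; returns True exactly when spraying
        has just stopped, i.e. when a visualization request signal
        should be sent to the image acquisition module."""
        spraying = flow_rate > self.FLOW_THRESHOLD
        stopped = self.was_spraying and not spraying
        self.was_spraying = spraying
        return stopped
```

Feeding the detector a stream of samples yields a single True at the falling edge of the flow, which is the moment the text identifies as right for triggering image acquisition.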
In addition, the intelligent water spray rod can also include a light-emitting element and a control switch. The light-emitting element is arranged on the body; if it has a wide illumination range, the requirements on its position can be relaxed accordingly. The power supply end of the light-emitting element is connected to a power supply through the control switch, and the control switch is turned on in response to the visualization request signal (obtained as described above), so that the light-emitting element lights up once powered. The mounting position of the light-emitting element can be chosen according to its type: if its beam covers only a small area, it can be placed beside the image acquisition module 101. When the image acquisition function is started, the light-emitting element illuminates the part to be observed so that the image acquisition module can capture a clear image.
As a preferred scheme, as shown in fig. 2, the intelligent water spray rod may further include a sterilization module 104, which may be disposed inside the nozzle of the water spray rod or in the water flow path. The sterilization module 104 can be implemented with an integrated chip, for example an IR2136 chip, which releases a high-frequency electric signal into the water in the nozzle to electrolyze some of the water molecules into positive and negative ions. Specifically, the chip's circuitry converts the ordinary mains voltage into a high-frequency alternating square-wave signal and rapidly electrolyzes the water flow before it reaches the nozzle, filling it with positive and negative water ions. Under the action of the sterilization module 104, these water ions carry a voltage of about 1–3 V; when the charged water ions are ejected and contact the flushed part, they discharge instantaneously, which can perforate the surface cells of bacteria and thus eliminate bacteria and viruses on the flushed part. The sterilization module 104 can be kept running continuously, or its operation can be tied to whether the spray head is spraying: for example, it starts when the spray head begins spraying and stops when spraying stops, which reduces power consumption.
In addition, referring to fig. 2, the nozzles may be divided by function into a cleaning nozzle and a sterilization nozzle: in the figure, the water sprayed from the nozzle 103 washes the part to be cleaned, the atomizer 105 sterilizes it, and the sterilization module 104 electrolyzes and applies voltage to the water sprayed from the atomizer 105.
Example 2
In the intelligent water spray rod provided by this embodiment, the image acquisition module 101 is shown in fig. 3 and includes an optical lens 11, a driving mechanism 13, an image detector 12 and an image processor 14; the output end of the image processor 14, serving as the output end of the image acquisition module 101, is connected to the terminal device 200.
Due to the particularity of the part to be observed, the key technology in the image acquisition module 101 is the focusing process of the optical lens, which strongly affects key image characteristics such as distortion, shading, white balance and resolution; the focusing process is therefore crucial. The scheme in this embodiment adopts an automatic focusing method whose basic principle is as follows: an auto-focusing method built on a search algorithm evaluates the imaging sharpness at different focusing positions of the optical lens with an evaluation function; exploiting the fact that the image is sharpest at the correct focusing position, it finds that position and takes the sharpest image as the captured image of the part to be observed. Briefly, the part to be observed is imaged onto the image detector 12 through the optical lens 11; the image processor 14 pre-processes, evaluates and analyzes the image, and adjusts the position of the optical lens 11 through the driving mechanism 13 until the imaging quality on the image detector 12 is optimal; the image processor 14 then converts the image into image data and sends it to the terminal device, which displays the image. Following this principle, the structure of the image acquisition module 101 can be designed as follows:
the optical lens 12 is arranged at the driving output end of the driving mechanism 11 and is used for acquiring and outputting a target image of a part to be observed; the driving mechanism 11 may be implemented by using a driving motor, and a driving shaft of the driving motor is used as a driving output end thereof. The optical lens 12 can be fixed at the shaft end of the driving shaft of the driving motor through a simple fixing seat, and when the driving shaft of the driving motor extends out or retracts, the optical lens 12 can be driven to move along the optical main shaft direction, so that the focal length of the optical lens can be adjusted. The image detector 12 acquires the target image and outputs an image signal obtained based on the target image; the image detector 12 may also be implemented by a commercially available image detector, the optical lens 12 may image the portion to be observed on a detection surface of the image detector, and the image detector converts the detected imaging signal into an image signal and outputs the image signal, where the image signal is an electrical signal. The image processor 14 acquires the image signal, and determines whether the focal length of the optical lens needs to be adjusted according to the image signal in combination with a preset automatic focusing model; outputting image data obtained based on the target image if the focal length of the optical lens does not need to be adjusted; and if the focal length of the optical lens needs to be adjusted, controlling the driving mechanism to drive the optical lens to move according to a set step length. That is, the image capturing module 101 needs to ensure that the image data of the portion to be observed, which is finally output by the image capturing module, corresponds to the clearest and most effective image captured by the image capturing module. 
In order to achieve the effect, a plurality of images need to be acquired in the process of changing the focal length of the optical lens 11, each image is judged by the automatic focusing model after being acquired, and if the image is the clearest, the image data corresponding to the image can be directly sent to the terminal device, so that the image restored by the terminal device according to the image data is the clearest, and the user can conveniently observe the image. The automatic focusing method in the scheme has the following advantages:
(1) In this scheme, image sharpness is evaluated from the input digital image itself, without depending on other factors, so sources of interference are relatively few, stability is relatively high, and adaptability is wide.
(2) The prior art already offers a variety of mature image sharpness evaluation algorithms to choose from. Different algorithms have different computational costs and sensitivities, which can be configured in software according to actual requirements, giving good flexibility.
(3) The optical lens can be focused in an intelligent manner; the focusing decision is flexible and diverse, and can adapt to differences between different people and different body parts.
Specifically, the three key algorithms involved in the automatic focusing process are respectively: an image definition evaluation algorithm (namely, a process of determining whether an image is the clearest or not by the preset automatic focusing model), a focusing window selection algorithm (namely, determining which part in the image acquired by the image acquisition module is selected for definition judgment), and an automatic focusing search algorithm (namely, selection of a focusing direction and a focusing step length).
(1) Image definition evaluation algorithm:
As shown in fig. 4, the image sharpness evaluation algorithm — that is, the process by which the preset auto-focusing model determines whether an image is the sharpest — proceeds as follows:
S101: carrying out frequency-domain transformation on the image signal to obtain frequency-domain image data. Existing high-definition cameras with image acquisition capability output digital image signals; evaluating image sharpness by transforming a digital image signal into the frequency domain is called a frequency-domain processing method. This embodiment uses a modified discrete cosine transform as the frequency-domain transform; its transform coefficients are all real numbers and can represent the frequency distribution of the image. The discrete cosine transform formula is as follows:
F(u, v) = c(u) c(v) · Σ_{x=0}^{M-1} Σ_{y=0}^{N-1} f(x, y) · cos[(2x+1)uπ / (2M)] · cos[(2y+1)vπ / (2N)],
u = 0, 1, ..., M-1; v = 0, 1, ..., N-1;
where
c(u) = √(1/M) for u = 0 and √(2/M) for u = 1, ..., M-1; c(v) = √(1/N) for v = 0 and √(2/N) for v = 1, ..., N-1.
In the above formula, f(x, y) is the two-dimensional spatial-domain function of the image signal: the image is divided into rows and columns, x and y are the row and column indices of a pixel, and f(x, y) is the value of the pixel in row x and column y (its gray value, brightness value, etc., chosen according to actual needs). F(u, v) is the array of transform coefficients, i.e. the frequency-domain function obtained by frequency-domain transformation of the image signal. M is the number of pixel rows of the image and N the number of pixel columns. Transforming the image signal with this formula gives good dispersion.
S102: acquiring high-frequency components and direct-current components in the frequency domain image data, wherein the high-frequency components refer to frequency ranges from 3MHz to 30MHzImage data within the enclosure, or defining high frequency components according to the bandwidth of the image data in the frequency domain, e.g. the highest frequency of the image data in the frequency domain is FmaxIt is possible to define a frequency higher than FG=FmaxThe image signal at 0.75 is a high frequency component. Because a clear image contains more frequency domain information than a blurred image and can better distinguish details, the higher the content of high-frequency components after Fourier transform of the image, namely the higher the energy of a high-frequency band, the clearer the image is.
In addition, the focused and defocused images produced by the optical lens differ greatly in brightness and gray level, and image sharpness is affected by both. Since the direct-current component reflects, to some extent, the overall brightness and overall information of the image, it is preferable to judge sharpness using the relative high-frequency component. The relative high-frequency component is obtained from the high-frequency components and the direct-current component, and can be obtained as follows: subtract the energy calculated from the direct-current component from the energy calculated from the high-frequency components, and take the resulting energy value as the relative high-frequency component.
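A minimal sketch of this metric on the DCT coefficient array. The patent does not fix the exact energy definition or band boundary; the radial cutoff mirroring the FG = Fmax × 0.75 rule and the squared-coefficient energy are illustrative assumptions:

```python
import numpy as np

def relative_high_freq(F, ratio=0.75):
    """Relative high-frequency component sketch: energy of the high band
    minus energy of the DC component. The 0.75 cutoff mirrors the
    FG = Fmax x 0.75 rule in the text; the radial band definition and
    the squared-coefficient energy are assumptions."""
    M, N = F.shape
    u = np.arange(M)[:, None] / max(M - 1, 1)
    v = np.arange(N)[None, :] / max(N - 1, 1)
    radius = np.sqrt(u ** 2 + v ** 2)          # normalized distance from DC
    high_energy = np.sum(F[radius >= ratio * radius.max()] ** 2)
    dc_energy = F[0, 0] ** 2
    return high_energy - dc_energy
```

A sharper image puts more energy in the high band, so the metric grows as focus improves, which is exactly what the search algorithm in step S103 maximizes.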
S103: if the relative high-frequency component is the maximum value, the focal length of the optical lens does not need to be adjusted, that is, the image corresponding to the maximum relative high-frequency component is clearest.
(2) Selection algorithm of focusing window
In the above scheme, the image detector 12 is further configured to process the target image and obtain the image data from the gray value of each row of pixel points in the horizontal direction of the target image. Because the texture features of the part to be observed are pronounced, the image has no obvious edge discontinuity in the vertical direction, and the gray values of the pixel points within each horizontal row are essentially uniform. Therefore, when evaluating image sharpness, it suffices to apply a gradient operation (or a similar operation) to the gray values of the horizontal pixels and then sum the gradient values or the relative high-frequency components of those pixels. The focusing window can thus be set directly from the gray values of the horizontal pixel rows of the target image detected by the image detector. Without affecting focusing accuracy, this minimizes the number of pixel points involved in the calculation and greatly improves the real-time performance of the system.
In addition, the image captured by the image acquisition module may cover the whole area of the buttocks, while the anus, pudendum and similar regions are the key areas the user wants to observe. Therefore, after the image acquisition module captures the image, the central area of the image, i.e., the key area, can be selected as the actual target part. In other words, during actual image processing the key region can be cropped out first, and parameters such as the gray values or brightness values of its horizontal pixel points can then be used as the basis for image processing.
To achieve the above effect, the image detector 12 is further configured to process the target image, determine the boundary lines where the gray values of the pixel points change abruptly, and take the region enclosed by those boundary lines as the key region; the gray value of each pixel point of a minimum operation area within the key region is then acquired as the image data, where the minimum operation area can be the area occupied by one or more pixel points. For the actual calculation, an image sharpness evaluation function common in existing image processing technology can be used, such as a sharpness evaluation function based on the fast wavelet transform. As described above, since the information of horizontally adjacent pixels in the target image is consistent, restricting the calculation to an operation area inside the key region preserves the accuracy of the result, and this focusing-window selection strategy minimizes the amount of computation.
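The boundary-and-window selection above can be sketched as follows. The gray-jump threshold and the column-span heuristic are assumptions for illustration; the patent only specifies "a boundary line of abrupt gray-value change" and a minimum operation area:

```python
import numpy as np

def key_region_columns(img, jump=30):
    """Sketch: locate the key region as the column span between the first
    and last abrupt horizontal gray-value changes (the 'mutation boundary').
    The `jump` threshold is an assumed value, not taken from the patent."""
    col_jump = np.abs(np.diff(img.astype(float), axis=1)).max(axis=0)
    edges = np.flatnonzero(col_jump > jump)
    if edges.size < 2:
        return 0, img.shape[1]           # no clear boundary: keep full width
    return int(edges[0]) + 1, int(edges[-1]) + 1

def horizontal_sharpness(img, lo, hi):
    """Sharpness over the key-region columns: sum of absolute
    horizontal gradients of the gray values."""
    region = img[:, lo:hi].astype(float)
    return float(np.abs(np.diff(region, axis=1)).sum())
```

Only the pixels between the detected boundary columns enter the sharpness sum, which is the computation-reduction point the paragraph makes.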
As another implementation, the image detector is further configured to process the target image, obtain the target part region in the target image using a convolutional neural network algorithm, and acquire the gray value of each pixel point of the minimum operation area in the target part region as the image data. Boundary-shape identification mainly relies on features such as perimeter, angle, curvature, width, height, diameter, area, roundness and moments. Convolutional neural networks are widely applied to object boundary-shape recognition; they can identify closed shapes and also achieve a high recognition rate for non-closed shapes. The convolutional neural network is a network proposed for pattern recognition problems on the basis of the neocognitron. It is a hierarchical, multi-layer neural network in which each layer contains neurons of the same type, and the connection patterns between layers are sparse and fixed. During training, a supervised algorithm is used and the network learns layer by layer when the required features are predetermined; otherwise, unsupervised learning is performed.
Fig. 5 shows the structure of the convolutional neural network. U0 is the input layer, the Uc layers are recognition layers, and the Us layers are feature extraction layers. The horizontal direction in the figure represents the four levels of the network, and the vertical direction represents the number of Us layers and Uc layers contained in each level; both the number of levels and the number of Us and Uc layers per level can be adjusted in practical applications. The U0 layer receives the pixel data of the original image input by the optical lens, and the Us1 layer extracts only relatively simple pixel features, such as the rough contour boundary of the target region. The subsequent Us2, Us3 and Us4 layers refine the contour boundary obtained by the Us1 layer; the extracted features grow correspondingly richer as the number of layers increases, and information such as pixel position, gray level and brightness is processed precisely. The Uc1, Uc2, Uc3 and Uc4 layers are feature mapping layers that map the features extracted by their corresponding Us layers back to images, which are then processed further by the Us layer of the next level. The key point of the convolutional neural network is therefore to progressively improve the extraction precision of each layer so as to obtain the final processing result.
The feature attributes to be extracted by each Us layer in the network (such as the positions, brightness values and gray values of pixel points) are preset values; weight updating during training is based on the reinforcement learning rule proposed by Fukushima, and the network is trained in an unsupervised manner. For example, Figs. 6 and 7 show part of the experimental samples. The samples are divided into four classes (triangle, quadrangle, octagon and circle), with 100 samples of each type selected for a total of 800 samples; 720 samples were used during training and the remaining 80 during testing. The final identification process and results are shown in Tables 1 and 2. During training, all parameters of the convolutional neural network are adjusted to ensure its image recognition accuracy; the input sample pictures are compared with the final recognition results, new influencing parameters can be added during training, and the trained convolutional neural network can then be used in this embodiment. Understandably, the larger the number of samples, the more accurate the training result of the convolutional neural network.
TABLE 1 Network parameters after training

Stage          Number of Us layers    Number of features contained in the Us layers
First stage    18                     32
Second stage   121                    300
Third stage    16                     280
Fourth stage   8                      80
TABLE 2 identification results
Combining the results in Table 2, it can be concluded that using a convolutional neural network for shape recognition of the target image gives a high recognition rate and strong resistance to distortion.
(3) Autofocus search algorithm
As shown in fig. 8, the image processor acquires the image signal and determines, from the image signal combined with a preset auto-focusing model, whether the focal length of the optical lens needs to be adjusted. Specifically, determining the position of the optical lens that yields the sharpest image by successive approximation may include the following steps:
an initial stage: after the image signal is acquired for the first time and the relative high-frequency component is obtained, adjusting the focal length of the optical lens according to a preset step length and a preset direction;
a trend judgment stage: acquiring an image signal and a relative high-frequency component after each time of focal length adjustment is completed, and searching a peak point of the relative high-frequency component according to the variation trend of the relative high-frequency component;
a reverse adjustment stage: changing the focusing direction and reducing the preset step length, then returning to the trend judgment stage;
finally, the trend judgment stage and the reverse adjustment stage are repeated until the preset step length has been reduced to its minimum value, and adjustment of the focal length of the optical lens stops when a peak point of the relative high-frequency component appears again.
The direction of the arrows in fig. 8 is the focusing direction of the optical lens, and the distance between two adjacent dots is the step length. As the figure shows, when focusing of the optical lens starts, the focusing direction is set first; to reach the vicinity of the focus point quickly, the preset step length can be set relatively large. Sharpness is then calculated after each image is acquired, and the sharpness of successive images is compared. Once the peak position has been crossed, the sharpness of the newly acquired image is lower than that of the previous image. At that point the focusing direction is reversed, the preset step length is reduced, and the search resumes with the smaller step. By repeatedly shrinking the search step and adjusting the focusing direction, the optimal imaging position is finally found.
When the curve shown in fig. 8 is disturbed and exhibits multiple peaks, the trend judgment is improved as shown in fig. 9: on the basis of the successive approximation algorithm, the trend of the curve is determined not merely from the sharpness after two focusing steps, but from the change in sharpness over three consecutive focusing steps. If the sharpness rises three times in a row, the curve is judged to be rising; if it falls three times in a row, the curve is judged to be falling; if the three readings mix rises and falls, the majority direction among them decides the trend. This avoids misjudging the direction of the curve when an interference peak occurs within two focusing steps.
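The three-stage search above (without the three-reading trend refinement) can be sketched as a hill-climbing loop; the function name, initial position and step values are illustrative, and halving the step on each reversal is an assumption:

```python
def autofocus(sharpness, pos=0.0, step=8.0, min_step=1.0):
    """Successive-approximation focus search sketch.
    sharpness(pos): focus metric (e.g. the relative high-frequency
    component) measured with the lens at position pos."""
    direction = +1                 # initial stage: preset direction and step
    best = sharpness(pos)
    while True:
        nxt = pos + direction * step
        val = sharpness(nxt)
        if val > best:             # trend judgment: still climbing the peak
            pos, best = nxt, val
        else:                      # peak crossed: reverse and shrink the step
            if step <= min_step:
                return pos         # lowest step reached: stop at the peak
            direction = -direction
            step /= 2.0
```

With a single-peaked sharpness curve the loop converges to within the minimum step of the true focus position, matching the coarse-to-fine behaviour illustrated in fig. 8.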
Further preferably, in the above aspect, the image processor is further configured to:
restore the target image from the image data, compare the target image with a sample image, and determine from the comparison result whether the part to be observed is normal; if the part to be observed is abnormal, output a prompt signal to remind the user. In the above scheme, sample images serving as standard templates may be preset in the image processor to determine whether the image of the part to be observed contains abnormal information, so that an instructive suggestion on whether the part is normal is provided through an artificial intelligence analysis function.
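A minimal sketch of such a template comparison. The mean-difference metric and the 0.2 threshold are assumptions made here for illustration; the patent only states that the target image is compared against preset sample images:

```python
import numpy as np

def looks_abnormal(target, sample, threshold=0.2):
    """Flag the part to be observed as abnormal when the restored target
    image deviates from the preset sample image by more than `threshold`
    mean absolute gray difference (gray values scaled to 0..1)."""
    t = np.asarray(target, dtype=float) / 255.0
    s = np.asarray(sample, dtype=float) / 255.0
    return float(np.abs(t - s).mean()) > threshold
```

In practice the comparison would follow registration and cropping to the key region described earlier, so that the template and the target are aligned before differencing.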
Example 3
This embodiment provides an intelligent toilet lid fitted with the intelligent water spray rod of any one of the above embodiments.
Example 4
This embodiment provides a flushing system integrated with a visualization function, as shown in fig. 10, comprising the intelligent water spray rod 100 described in any one of the above embodiments and a terminal device 200 equipped with a display module. The terminal device 200 receives the image data sent by the intelligent water spray rod 100, restores from it the image of the part to be observed, and displays that image through the display module for the user to observe. The intelligent water spray rod 100 and the terminal device 200 can be connected through a wireless communication module such as Bluetooth; once pairing succeeds, the image of the part to be observed captured by the image acquisition module in the intelligent water spray rod 100 can be transmitted to the terminal device 200 for display. The image acquisition module in the intelligent water spray rod 100 can use a high-definition camera with automatic focusing; with the sharpness evaluation method based on the discrete cosine transform and the dedicated focus search algorithm, visualization quality in a dark environment is greatly improved and the picture remains clear. Moreover, because the intelligent water spray rod 100 integrates a sterilization module, the water leaving the spray head is effectively purified a second time, giving the sprayed water flow a sterilizing capability. Preferably, the intelligent water spray rod 100 also has an artificial intelligence analysis function that provides an instructive suggestion on whether the photographed part is normal and can transmit the suggestion to the terminal device 200 to prompt the user.
Finally, it should be noted that: the above examples are only intended to illustrate the technical solution of the present invention, but not to limit it; although the present invention has been described in detail with reference to the foregoing embodiments, it will be understood by those of ordinary skill in the art that: the technical solutions described in the foregoing embodiments may still be modified, or some technical features may be equivalently replaced; and such modifications or substitutions do not depart from the spirit and scope of the corresponding technical solutions of the embodiments of the present invention.

Claims (14)

1. An intelligent water spray rod, characterized by comprising an image acquisition module, wherein:
the image acquisition module is arranged on the body of the water spray rod, and the image acquisition end surface of the image acquisition module and the spray head of the water spray rod are positioned on the same side of the body;
and the image acquisition module acquires the image of the part to be observed after responding to the visualization request signal and outputs image data obtained based on the image to the terminal equipment.
2. The intelligent water spray rod of claim 1, further comprising a detection module, wherein:
the detection module is used for detecting whether the spray head of the water spray rod sprays water to the cleaned part, and when the detection module detects that the spray head is switched from a water spraying state to a water spraying stopping state, the detection module outputs a visualization request signal to the image acquisition module.
3. The intelligent water spray rod of claim 1, further comprising a light-emitting element and a control switch, wherein:
the light-emitting element is arranged on the body, and the power supply end of the light-emitting element is connected to a power supply through the control switch;
and the control switch is turned on after responding to the visual request signal, so that the light-emitting element emits light after obtaining electric energy.
4. The intelligent water spray rod of claim 1, wherein the image acquisition module comprises a driving mechanism, an optical lens, an image detector and an image processor:
the optical lens is arranged on the driving output end of the driving mechanism and used for acquiring and outputting a target image of a part to be observed;
the image detector acquires the target image and outputs an image signal obtained based on the target image;
the image processor is used for acquiring the image signal and determining whether the focal length of the optical lens needs to be adjusted or not according to the image signal and a preset automatic focusing model; outputting image data obtained based on the target image if the focal length of the optical lens does not need to be adjusted; and if the focal length of the optical lens needs to be adjusted, controlling the driving mechanism to drive the optical lens to move according to a set step length.
5. The intelligent water spray rod of claim 4, wherein the preset auto-focusing model in the image processor is used to:
carrying out frequency domain transformation processing on the image signal to obtain frequency domain image data;
acquiring a high-frequency component and a direct-current component in the frequency domain image data; acquiring a relative high-frequency component of the image signal according to the high-frequency component and the direct-current component;
if the relative high-frequency component is the maximum value, the focal length of the optical lens does not need to be adjusted.
6. The intelligent water spray rod of claim 5, wherein:
the image detector is further used for processing the target image and obtaining the image data according to the gray value of each line of pixel points in the horizontal direction of the target image.
7. The intelligent water spray rod of claim 5, wherein:
the image detector is further used for processing the target image and determining a boundary line of the gray value mutation of the pixel points in the target image, and a region surrounded by the boundary line is used as a key region;
and acquiring the gray value of each pixel point of the minimum operation area in the key area as the image data.
8. The intelligent water spray rod of claim 5, wherein:
the image detector is also used for processing the target image and acquiring a target part area in the target image by adopting a convolutional neural network algorithm;
and acquiring the gray value of each pixel point of the minimum operation area in the target part area as the image data.
9. The intelligent water spray rod of any one of claims 5-8, wherein the image processor acquiring the image signal and determining, from the image signal combined with the preset auto-focusing model, whether the focal length of the optical lens needs to be adjusted comprises:
in the initial stage, after the image signal is acquired for the first time and the relative high-frequency component is obtained, the focal length of the optical lens is adjusted according to a preset step length and a preset direction;
a trend judgment stage, namely acquiring the image signal and the relative high-frequency component after the focal length adjustment is finished each time, and searching a peak point of the relative high-frequency component according to the variation trend of the relative high-frequency component;
a reverse adjustment stage, namely changing the focusing direction and reducing the preset step length, and then returning to the trend judgment stage;
and repeating the trend judgment stage and the reverse adjustment stage until the preset step length is reduced to the lowest value, and stopping adjusting the focal length of the optical lens when a relative high-frequency component peak point appears again.
10. The intelligent water spray rod of claim 9, wherein the image processor is further configured to:
restoring according to the image data to obtain a target image, comparing the target image with the sample image, and determining whether the part to be observed is normal or not according to a comparison result;
if the part to be observed is abnormal, a prompt signal is output to remind the user.
11. The intelligent water spray rod of any one of claims 1-8, further comprising a sterilization module:
the sterilization module is used for releasing a high-frequency electric signal into the water in the spray head so as to electrolyze part of the water molecules into positive and negative ions.
12. The intelligent water spray rod of claim 11, wherein:
the high-frequency electric signal released by the sterilization module applies a voltage of 1-3 V to the water in the spray head.
13. An intelligent toilet lid, characterized in that the toilet lid is fitted with the intelligent water spray rod of any one of claims 1-12.
14. A flushing system integrated with a visualization function, comprising the intelligent water spray rod of any one of claims 1-12 and a terminal device equipped with a display module, wherein:
the terminal equipment receives the image data sent by the intelligent water spray rod, restores the image data into an image of a part to be observed corresponding to the image data, and displays the image through the display module for a user to observe.
CN201811012524.7A 2018-08-31 2018-08-31 Intelligent water spraying rod, intelligent toilet lid and flushing system integrated with visual function Pending CN110872861A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201811012524.7A CN110872861A (en) 2018-08-31 2018-08-31 Intelligent water spraying rod, intelligent toilet lid and flushing system integrated with visual function


Publications (1)

Publication Number Publication Date
CN110872861A true CN110872861A (en) 2020-03-10



Citations (10)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN102352649A (en) * 2011-07-31 2012-02-15 刘建堂 Visual and controllable toilet bowl
CN102902131A (en) * 2011-07-28 2013-01-30 Lg伊诺特有限公司 Touch-type portable terminal
CN102979155A (en) * 2012-12-04 2013-03-20 涂国坚 After-defecation washer
CN105046260A (en) * 2015-07-31 2015-11-11 小米科技有限责任公司 Image pre-processing method and apparatus
CN105518230A (en) * 2013-09-05 2016-04-20 I·迪亚盖伊 Electronic automatically adjusting bidet with visual object recognition software
CN205473109U (en) * 2016-04-07 2016-08-17 郑银磊 High -frequency oscillation physics method combines electrolysis ionic compound method water treatment ware
US9756297B1 (en) * 2015-07-06 2017-09-05 sigmund lindsay clements Camera for viewing and sensing the health of a user sitting on a toilet
CN107574897A (en) * 2017-10-25 2018-01-12 程炽坤 Closestool framing water injector and framing water spray means
CN108222163A (en) * 2018-01-18 2018-06-29 徐道胜 The dark scrubber of human body
CN108389207A (en) * 2018-04-28 2018-08-10 上海视可电子科技有限公司 A kind of the tooth disease diagnosing method, diagnostic device and intelligent image harvester


Non-Patent Citations (4)

* Cited by examiner, † Cited by third party
Title
X5522288: "卷积神经网络" (Convolutional Neural Networks) *
张招贤 (Zhang Zhaoxian) et al.: 《应用电极学》 (Applied Electrode Science), Metallurgical Industry Press, 31 August 2005 *
王健 (Wang Jian): "Research on Auto-Focusing Technology Based on Image Processing" (基于图像处理的自动调焦技术研究), China Doctoral Dissertations Full-text Database (electronic journal) *
王冲 (Wang Chong): 《现代信息检索技术基本原理教程》 (Tutorial on Basic Principles of Modern Information Retrieval Technology), Xidian University Press, 30 November 2013 *

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN111598064A (en) * 2020-07-23 2020-08-28 杭州跨视科技有限公司 Intelligent toilet and cleaning control method thereof
CN111598064B (en) * 2020-07-23 2021-05-18 杭州跨视科技有限公司 Intelligent toilet and cleaning control method thereof


Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
WD01 Invention patent application deemed withdrawn after publication (application publication date: 20200310)