WO2022183536A1 - Smart toilet for health detection and health detection method thereof - Google Patents

Smart toilet for health detection and health detection method thereof

Info

Publication number
WO2022183536A1
WO2022183536A1 · PCT/CN2021/081980
Authority
WO
WIPO (PCT)
Prior art keywords
toilet
user
flow rate
urine flow
image
Prior art date
Application number
PCT/CN2021/081980
Other languages
English (en)
French (fr)
Inventor
顾红松
朱樊
郑晓英
顾海松
Original Assignee
杭州跨视科技有限公司
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by 杭州跨视科技有限公司 filed Critical 杭州跨视科技有限公司
Publication of WO2022183536A1 publication Critical patent/WO2022183536A1/zh

Classifications

    • E FIXED CONSTRUCTIONS
    • E03 WATER SUPPLY; SEWERAGE
    • E03D WATER-CLOSETS OR URINALS WITH FLUSHING DEVICES; FLUSHING VALVES THEREFOR
    • E03D11/00 Other component parts of water-closets, e.g. noise-reducing means in the flushing system, flushing pipes mounted in the bowl, seals for the bowl outlet, devices preventing overflow of the bowl contents; devices forming a water seal in the bowl after flushing; devices eliminating obstructions in the bowl outlet or preventing backflow of water and excrements from the waterpipe
    • E03D11/02 Water-closet bowls; bowls with a double odour seal, optionally with provisions for a good siphonic action; siphons as part of the bowl
    • E FIXED CONSTRUCTIONS
    • E03 WATER SUPPLY; SEWERAGE
    • E03D WATER-CLOSETS OR URINALS WITH FLUSHING DEVICES; FLUSHING VALVES THEREFOR
    • E03D9/00 Sanitary or other accessories for lavatories; devices for cleaning or disinfecting the toilet room or the toilet bowl; devices for eliminating smells
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06N COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00 Computing arrangements based on biological models
    • G06N3/02 Neural networks
    • G06N3/04 Architecture, e.g. interconnection topology
    • G06N3/045 Combinations of networks
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06N COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00 Computing arrangements based on biological models
    • G06N3/02 Neural networks
    • G06N3/08 Learning methods
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04L TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
    • H04L67/00 Network arrangements or protocols for supporting network services or applications
    • H04L67/01 Protocols
    • H04L67/10 Protocols in which an application is distributed across nodes in the network
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N13/00 Stereoscopic video systems; Multi-view video systems; Details thereof
    • H04N13/10 Processing, recording or transmission of stereoscopic or multi-view image signals
    • H04N13/106 Processing image signals
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N13/00 Stereoscopic video systems; Multi-view video systems; Details thereof
    • H04N13/10 Processing, recording or transmission of stereoscopic or multi-view image signals
    • H04N13/106 Processing image signals
    • H04N13/128 Adjusting depth or disparity
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N13/00 Stereoscopic video systems; Multi-view video systems; Details thereof
    • H04N13/10 Processing, recording or transmission of stereoscopic or multi-view image signals
    • H04N13/194 Transmission of image signals
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N13/00 Stereoscopic video systems; Multi-view video systems; Details thereof
    • H04N13/20 Image signal generators
    • H04N13/204 Image signal generators using stereoscopic image cameras

Definitions

  • the invention belongs to the technical field of intelligent bathroom detection, and particularly relates to an intelligent toilet for health detection and a health detection method thereof.
  • at present, the toilet, though a tool people use daily, lacks any function for routine health observation of its users.
  • in the prior art, the user is confirmed through fingerprint recognition and visible-light face recognition, but such methods are unsanitary, lack privacy, and provide no health observation function.
  • the BSFS (Bristol Stool Form Scale) observes health status from the shape of feces; it classifies the form human feces take after defecation and requires visual inspection by a physician, which is insufficiently accurate and inconvenient to operate.
  • the purpose of the embodiments of this specification is to provide an intelligent toilet for health detection, which can judge the user's health status by identifying the user's feces and analyzing the urine flow.
  • in one aspect, an intelligent toilet for health detection is provided, including a toilet, a human body surface recognition system, a multispectral control and computational imaging system, a non-visible light source control system, a multispectral light source, an optical receiver and a binocular camera, one camera of the binocular camera including a projection objective lens.
  • the multispectral light source, the optical receiver and the projection objective lens are arranged directly above the toilet.
  • the human body surface recognition system is connected to the multispectral control and computational imaging system and to the optical receiver, and the multispectral control and computational imaging system is connected to the non-visible light source control system; through the non-visible light source control system, the multispectral control and computational imaging system controls the multispectral light source to emit non-visible light onto the stool surface in the toilet, and after reflection from the object surface the light enters the optical receiver.
  • the optical receiver converts the received non-visible light into electrical signals and sends them to the human body surface recognition system and the multispectral control and computational imaging system, which identify the user and locate the user's target cleaning part.
  • the projection objective lens receives the non-visible light from the feces in the toilet and photographs it; the captured fecal image signal is sent to the multispectral control and computational imaging system, which processes and analyzes the image of the measured object.
  • the binocular cameras are set on both sides of the toilet; the two cameras are placed at right angles, facing away from the user, to capture the user's urination frame images, analyze urodynamics, and analyze the urine composition multispectrally; the binocular camera is connected to the multispectral control and computational imaging system, which processes the urination frame images.
  • in another aspect, a health detection method for a smart toilet is provided, which judges the user's health status by identifying the feces in the toilet and by detecting the urine flow.
  • the process of judging the user's health status through recognition of the feces in the toilet is as follows: emit non-visible light onto the stool surface in the toilet; receive the non-visible light reflected from the object surface and convert it into an electrical signal; locate and identify the user's physical signs; receive the non-visible light from the feces in the toilet and photograph it; send the fecal image signal to the multispectral control and computational imaging system; according to the recognition result of the human body surface recognition system, combine the electrical signal of the optical receiver with the fecal image signal from the micro-display surface to obtain image data carrying 3D information; use the IMD neural network model to locate and recognize physical signs, identify the detected objects in the toilet, and infer changes in the user's health.
  • the process of judging the user's health status by detecting the urine flow is as follows: capture and analyze the user's urination frame images; capture the user's urine flow; synchronize the image frames; perform depth estimation by geometric calculation from two synchronized frames; estimate and correct the flow rate; measure the urine flow rate, plot the per-second measurements as a curve and, from the urine flow-rate curve, calculate each urine flow parameter, including the maximum flow rate, flow time, average flow rate, time to maximum flow rate, 2-second flow rate and total urine volume; check whether urination function is normal and determine whether the user's urinary tract is obstructed.
  • the present invention performs user identification via physical signs using non-visible light sensing technology.
  • automatic excrement recognition is applied to carry out efficient daily observation of the user's health, discover hidden health risks in time, and give sanitary products a health observation function.
  • the present invention uses non-visible light sensing technology to ensure privacy.
  • the IMD neural network model of the present invention uses multiple smaller one-dimensional convolution kernels to extract image features at different scales and fuses them into a feature map richer in spatial information, and performs detection on feature maps of different scales, making it suitable for detecting objects of various sizes.
  • the present invention uses cloud computing and/or edge computing with corresponding AI chips, which avoids the time cost of data transmission and further protects user privacy.
  • FIG. 1 is a diagram of a uroflowmeter in the prior art;
  • FIG. 2 is a schematic diagram of the non-visible light sign sensing technology of Embodiment 1 of this specification;
  • FIG. 3 is a detection diagram of the non-visible light sign sensing technology of Embodiment 1 of this specification;
  • FIG. 4 is a schematic diagram of the AI module/chip of Embodiment 1 of this specification;
  • FIG. 5 is a schematic diagram of the fecal state classification and recognition process of Embodiment 1 of this specification;
  • FIG. 6 is a schematic diagram of the IMD neural network model of Embodiment 1 of this specification;
  • FIG. 8 is a schematic diagram of the binocular stereo vision depth measurement of Embodiment 2 of this specification.
  • the smart toilet for health detection provided in Embodiment 1 of this specification includes a toilet, a human body surface recognition system, a multispectral control and computational imaging system, a non-visible light source control system, a multispectral light source, an optical receiver and a binocular camera, one camera of the binocular camera including a projection objective lens; the multispectral light source, the optical receiver and the projection objective lens are arranged directly above the toilet, and the human body surface recognition system is connected to the multispectral control and computational imaging system and to the optical receiver.
  • the multispectral control and computational imaging system is connected to the non-visible light source control system and, through it, controls the multispectral light source to emit non-visible light onto the surface of the feces in the toilet; after reflection from the object surface the light enters the optical receiver.
  • the optical receiver converts the received non-visible light into electrical signals and sends them to the human body surface recognition system and the multispectral control and computational imaging system, which identify the user and locate the user's target cleaning part.
  • the projection objective lens receives the non-visible light from the feces in the toilet and photographs it through the micro-display surface; the captured fecal image signal is sent to the multispectral control and computational imaging system to process and analyze the image of the measured object;
  • the binocular cameras are set on both sides of the toilet; the two cameras are placed at right angles, facing away from the user, to capture the user's urination frame images, analyze urodynamics, and analyze the urine composition multispectrally; the binocular camera is connected to the multispectral control and computational imaging system to process the urination frame images.
  • the human body surface recognition system identifies the human body; at each health check it locates and identifies the target cleaning part of the human body and performs health detection;
  • the multispectral control and computational imaging system combines, according to the recognition result of the human body surface recognition system, the electrical signal of the optical receiver with the fecal image signal from the micro-display surface to obtain image data carrying 3D information, and from that data realizes localization and recognition of physical signs through the IMD (Inception-Multibox Detector) neural network model;
  • the multispectral light source is a non-visible infrared (IR) multispectral light source, used to emit specially modulated non-visible infrared light onto the feces;
  • the optical receiver is a non-visible infrared (IR) receiver, used to receive the non-visible infrared light reflected by the feces and provide 3D spatial information of the feces;
  • the projection objective lens adopts an ordinary lens module and shoots with non-visible light to obtain 2D picture data.
  • the present invention uses non-visible light sensing technology, and the projection objective lens is used for capturing 2D picture data, thereby ensuring privacy.
  • the multi-spectral control and computational imaging system integrates the 2D picture data captured by the ordinary lens module and the 3D spatial information obtained by the non-visible light infrared receiver, and obtains the picture data with 3D information of the feces after algorithm processing.
  • each inception (perception) module includes multiple convolutional layers using filters of different sizes and max-pooling layers;
  • each feature-map size-reduction module includes multiple convolutional layers and average-pooling layers;
  • the stitching (concatenation) layers fuse features of different scales; additional convolutional layers are connected to predict bounding boxes, confidences and label classes; non-maximum suppression is performed on the resulting predictions, and the model finally outputs bounding boxes, confidences and classes, achieving 3D localization and user identification.
  • the IMD neural network model has two advantages: first, multiple smaller one-dimensional convolution kernels extract image features at different scales and fuse them into a feature map richer in spatial information; second, detection is performed on feature maps of different scales, which suits objects of various sizes.
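The non-maximum suppression step named above is a standard post-processing routine; a minimal pure-Python sketch over axis-aligned (x1, y1, x2, y2) boxes (the 0.5 IoU threshold is an illustrative choice, not a value from the patent):

```python
def iou(a, b):
    """Intersection-over-union of two boxes given as (x1, y1, x2, y2)."""
    ix1, iy1 = max(a[0], b[0]), max(a[1], b[1])
    ix2, iy2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0.0, ix2 - ix1) * max(0.0, iy2 - iy1)
    area_a = (a[2] - a[0]) * (a[3] - a[1])
    area_b = (b[2] - b[0]) * (b[3] - b[1])
    union = area_a + area_b - inter
    return inter / union if union > 0 else 0.0

def non_max_suppression(boxes, scores, iou_thresh=0.5):
    """Keep the highest-scoring box, drop overlapping lower-scoring ones,
    and repeat; returns indices of the surviving predictions."""
    order = sorted(range(len(boxes)), key=lambda i: scores[i], reverse=True)
    keep = []
    while order:
        best = order.pop(0)
        keep.append(best)
        order = [i for i in order if iou(boxes[best], boxes[i]) < iou_thresh]
    return keep
```

A detector of this family would run this once per predicted class before emitting the final boxes, confidences and labels.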
  • the detected objects in the toilet are identified.
  • the identification results include clean, excrement and toilet paper.
  • the excrement is classified into constipation, normal and diarrhea.
  • according to the BSFS (Bristol Stool Form Scale),
  • the excrement is divided into 7 grades (BS1-7), of which grades 1-2 (BS1-2) indicate constipation, grades 3-5 (BS3-5) are normal, and grades 6-7 (BS6-7) indicate diarrhea.
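The BS1-7 grading above reduces to a simple lookup; a sketch of the mapping exactly as stated:

```python
def classify_bristol(grade: int) -> str:
    """Map a Bristol Stool Form Scale grade (BS1-BS7) to the three
    summary categories used in the text: constipation / normal / diarrhea."""
    if not 1 <= grade <= 7:
        raise ValueError("Bristol grade must be in 1..7")
    if grade <= 2:         # BS1-BS2
        return "constipation"
    if grade <= 5:         # BS3-BS5
        return "normal"
    return "diarrhea"      # BS6-BS7
```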
  • the smart toilet for health detection provided in Embodiment 2 of this specification includes a toilet, a human body surface recognition system, a multispectral control and computational imaging system, a non-visible light source control system, a multispectral light source, an optical receiver and a binocular camera, one camera of the binocular camera including a projection objective lens; the multispectral light source, the optical receiver and the projection objective lens are arranged directly above the toilet, and the human body surface recognition system is connected to the multispectral control and computational imaging system and to the optical receiver.
  • the multispectral control and computational imaging system is connected to the non-visible light source control system and, through it, controls the multispectral light source to emit non-visible light onto the surface of the feces in the toilet; after reflection from the object surface the light enters the optical receiver.
  • the optical receiver converts the received non-visible light into electrical signals and sends them to the human body surface recognition system and the multispectral control and computational imaging system, which identify the user and locate the user's target cleaning part.
  • the projection objective lens receives the non-visible light from the feces in the toilet and photographs it through the micro-display surface; the captured fecal image signal is sent to the multispectral control and computational imaging system to process and analyze the image of the measured object.
  • the binocular cameras are set on both sides of the toilet; the two lenses are placed at right angles, facing away from the user, to capture the user's urination frame images, analyze urine dynamics, and analyze the urine composition multispectrally.
  • the binocular camera is connected to the multispectral control and computational imaging system to process the urination frame images,
  • the urination frame image is processed as follows:
  • Step 1: capture the user's urine flow falling within the wide-angle FOV of the binocular camera, specifically:
  • Step 1.1: wide-angle FOV frame distortion correction;
  • Step 1.2: black-and-white conversion and smoothing;
  • Step 1.3: background subtraction to extract the urine-flow area in the frame;
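Steps 1.2-1.3 can be sketched as a toy pipeline on plain nested lists; a real system would use a camera SDK or OpenCV for undistortion and background modeling, and the luma weights and threshold below are conventional choices, not values from the patent (Step 1.1's lens model is omitted here):

```python
def to_grayscale(frame_rgb):
    """Step 1.2: black-and-white conversion using the usual luma weights."""
    return [[0.299 * r + 0.587 * g + 0.114 * b for (r, g, b) in row]
            for row in frame_rgb]

def subtract_background(gray, background, thresh=30.0):
    """Step 1.3: per-pixel absolute difference against a static background,
    thresholded to a binary mask (1 = candidate urine-flow pixel)."""
    return [[1 if abs(p - q) > thresh else 0 for p, q in zip(row, brow)]
            for row, brow in zip(gray, background)]

def flow_region_pixels(mask):
    """Count foreground pixels: a crude proxy for the urine-flow area."""
    return sum(sum(row) for row in mask)
```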
  • Step 2: synchronize the image frames of the binocular camera;
  • Step 3: depth estimation; since a single camera lacks depth information, use geometric calculation to estimate depth from two synchronized frames;
  • the depth estimation in Step 3 adopts binocular stereo vision, specifically:
  • Step 3.1: calibrate the binocular camera to obtain its intrinsic and extrinsic parameters and the homography matrix;
  • Step 3.2: rectify the original images according to the calibration result so that the two corrected images lie in the same plane and are parallel to each other;
  • Step 3.3: perform pixel-point matching on the two rectified images;
  • the matching formula is as follows:
  • x, y are the actual positions;
  • x_l, y_l are the pixel positions in the image captured by the left camera of the binocular camera;
  • x_r, y_r are the pixel positions in the image captured by the right camera of the binocular camera;
  • Step 3.4: calculate the depth of each pixel according to the matching result.
  • the depth formula is as follows:
  • f is the focal length of the binocular camera;
  • b is the distance (baseline) between the left and right cameras of the binocular camera.
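The depth formula itself did not survive this rendering of the document; for a rectified stereo pair it is the standard triangulation Z = f·b/(x_l − x_r). A minimal sketch (the numeric f and b in the test are illustrative values, not from the patent):

```python
def stereo_depth(x_l: float, x_r: float, f: float, b: float) -> float:
    """Standard rectified-stereo triangulation: depth Z = f * b / d,
    where d = x_l - x_r is the disparity between the matched pixel
    columns (left minus right), f the focal length in pixels, and
    b the baseline between the two cameras."""
    d = x_l - x_r
    if d <= 0:
        raise ValueError("disparity must be positive for a point in front of the rig")
    return f * b / d
```

With f = 700 px and b = 0.05 m, a 50-pixel disparity corresponds to a depth of 0.7 m; larger disparities mean nearer points.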
  • Step 4: flow-velocity estimation and correction, specifically:
  • Step 4.1: obtain a rough measurement of urine output from the change in flow rate;
  • Step 4.2: use two ROIs (regions of interest) in the image frame to estimate the flow velocity;
  • Step 4.3: correct the flow rate by dividing the sum of the depth-corrected pixel values by the frame offset;
  • Step 5: measure the urine flow rate; plot the per-second flow rates obtained in Steps 1-4 as a curve and, from the urine flow-rate curve, calculate each urine flow parameter, including the maximum flow rate, flow time, average flow rate, time to maximum flow rate, 2-second flow rate and total urine volume; check whether urination function is normal and determine whether the user's urinary tract is obstructed.
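The curve parameters of Step 5 can be derived as below, assuming a 1 Hz flow-rate series in mL/s; the exact clinical definitions (e.g. whether flow time counts pauses) are simplified assumptions of this sketch:

```python
def uroflow_parameters(flow):
    """Derive the parameters named in Step 5 from a per-second
    urine flow-rate curve `flow` (mL/s, one sample per second)."""
    q_max = max(flow)
    t_q_max = flow.index(q_max)                 # time to maximum flow rate (s)
    flow_time = sum(1 for q in flow if q > 0)   # seconds with active flow
    total_volume = sum(flow)                    # mL, since samples are 1 s apart
    q_avg = total_volume / flow_time if flow_time else 0.0
    q_2s = flow[2] if len(flow) > 2 else 0.0    # flow rate at the 2-second mark
    return {
        "max_flow_rate": q_max,
        "time_to_max_flow": t_q_max,
        "flow_time": flow_time,
        "average_flow_rate": q_avg,
        "two_second_flow_rate": q_2s,
        "total_volume": total_volume,
    }
```

A flat, prolonged curve (low Qmax, long flow time) is the pattern a clinician would read as possible obstruction; the patent leaves the decision rule unspecified.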
  • cloud computing and/or edge computing are used for user sign localization and recognition, image processing of the measured object, and urination-frame image processing.
  • in the model training stage, image data are transmitted to the cloud, and cloud computing performs image recognition based on a deep-learning framework.
  • after model training is completed, as shown in FIG. 4, the model is embedded in the edge AI chip; in the real-time recognition stage, faster and safer edge computing is used and image recognition runs in the edge AI chip, avoiding the time cost of data transmission and better protecting user privacy.
  • a health detection method for a smart toilet which judges the health status of a user by identifying the feces in the toilet and detecting the urine flow, respectively.
  • Step A1: the multispectral control and computational imaging system, through the non-visible light source control system, controls the multispectral light source to emit non-visible light onto the stool surface in the toilet;
  • Step A2: after reflection from the object surface, the light enters the optical receiver, which converts the received non-visible light into an electrical signal and sends it to the human body surface recognition system and the multispectral control and computational imaging system;
  • Step A3: the human body surface recognition system locates and recognizes the user's physical signs;
  • Step A4: the projection objective lens receives the non-visible light from the feces in the toilet and shoots through the micro-display surface;
  • Step A5: the fecal image signal obtained by the micro-display surface is sent to the multispectral control and computational imaging system;
  • Step A6: according to the recognition result of the human body surface recognition system, the multispectral control and computational imaging system combines the electrical signal of the optical receiver with the fecal image signal from the micro-display surface to obtain image data carrying 3D information;
  • Step A7: from the image data with 3D information, localization and recognition of physical signs are realized through the IMD (Inception-Multibox Detector) neural network model;
  • Step A8: identify the detected objects in the toilet based on the IMD neural network model.
  • the identification results include clean, excrement and toilet paper.
  • the excrement is classified and then summarized as constipation, normal or diarrhea,
  • according to the BSFS (Bristol Stool Form Scale).
  • the identification by the IMD neural network model in Step A8 is specifically: the image to be detected is input; the front end of the network has 5 convolutional layers and 2 pooling layers, followed by 3 inception (perception) modules and 2 feature-map size-reduction modules, the 2 size-reduction modules being interleaved between the 3 inception modules, with stitching (concatenation) layers set between them; each inception module includes multiple convolutional layers using filters of different sizes and max-pooling layers, and each size-reduction module includes multiple convolutional layers and average-pooling layers; the concatenation layers fuse features of different scales, additional convolutional layers are connected to predict bounding boxes, confidences and label classes, non-maximum suppression is performed on the resulting predictions, and the network finally outputs bounding boxes, confidences and classes, achieving 3D localization and user identification.
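The multi-branch extraction with "multiple smaller one-dimensional convolution kernels" can be illustrated with a toy pure-Python block: several 1-D kernels of different sizes run over the same input signal and their feature maps are kept side by side for stitching. The kernels below are illustrative placeholders, not trained weights from the patent:

```python
def conv1d(signal, kernel):
    """'Same'-padded 1-D convolution (correlation form) with a small kernel."""
    k = len(kernel)
    pad = k // 2
    padded = [0.0] * pad + list(signal) + [0.0] * pad
    return [sum(padded[i + j] * kernel[j] for j in range(k))
            for i in range(len(signal))]

def inception_branch_fusion(signal):
    """Toy inception-style block: apply 1-, 3- and 5-tap kernels to the
    same input and return the per-branch feature maps, ready to be
    concatenated ('stitched') as the IMD description suggests."""
    kernels = {
        "k1": [1.0],                          # 1x1 branch (identity here)
        "k3": [0.25, 0.5, 0.25],              # 3-tap smoothing branch
        "k5": [-1.0, 0.0, 2.0, 0.0, -1.0],    # 5-tap band-pass-like branch
    }
    return {name: conv1d(signal, k) for name, k in kernels.items()}
```

Each branch sees the same input at a different receptive-field size, which is the mechanism behind the "richer spatial information feature map" claim.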
  • the IMD neural network model has two advantages: first, multiple smaller one-dimensional convolution kernels extract image features at different scales and fuse them into a feature map richer in spatial information; second, detection is performed on feature maps of different scales, which suits objects of various sizes.
  • the detected objects in the toilet are identified.
  • the identification results include clean, excrement and toilet paper.
  • the excrement is classified into constipation, normal and diarrhea.
  • according to the BSFS (Bristol Stool Form Scale),
  • the excrement is divided into 7 grades (BS1-7), of which grades 1-2 (BS1-2) indicate constipation, grades 3-5 (BS3-5) are normal, and grades 6-7 (BS6-7) indicate diarrhea.
  • Step B1: the binocular camera captures and analyzes the user's urination frame images;
  • Step B2: capture the user's urine flow falling within the wide-angle FOV of the binocular camera, specifically:
  • Step B2.1: wide-angle FOV frame distortion correction;
  • Step B2.2: black-and-white conversion and smoothing;
  • Step B2.3: background subtraction to extract the urine-flow area in the frame;
  • Step B3: synchronize the image frames of the binocular camera;
  • Step B4: depth estimation; since a single camera lacks depth information, use geometric calculation to estimate depth from two synchronized frames;
  • the depth estimation in Step B4 adopts binocular stereo vision, specifically:
  • Step B4.1: calibrate the binocular camera to obtain its intrinsic and extrinsic parameters and the homography matrix;
  • Step B4.2: rectify the original images according to the calibration result so that the two corrected images lie in the same plane and are parallel to each other;
  • Step B4.3: perform pixel-point matching on the two rectified images;
  • the matching formula is as follows:
  • x, y are the actual positions;
  • x_l, y_l are the pixel positions in the image captured by the left camera of the binocular camera;
  • x_r, y_r are the pixel positions in the image captured by the right camera of the binocular camera;
  • Step B4.4: calculate the depth of each pixel according to the matching result.
  • the depth formula is as follows:
  • f is the focal length of the binocular camera;
  • b is the distance (baseline) between the left and right cameras of the binocular camera.
  • Step B5: flow-velocity estimation and correction, specifically:
  • Step B5.1: obtain a rough measurement of urine output from the change in flow rate;
  • Step B5.2: use two ROIs (regions of interest) in the image frame to estimate the flow velocity;
  • Step B5.3: correct the flow rate by dividing the sum of the depth-corrected pixel values by the frame offset;
  • Step B6: measure the urine flow rate; plot the per-second flow rates obtained in Steps B2-B5 as a curve and, from the urine flow-rate curve, calculate each urine flow parameter, including the maximum flow rate, flow time, average flow rate, time to maximum flow rate, 2-second flow rate and total urine volume; check whether urination function is normal and determine whether the user's urinary tract is obstructed.
  • the detection method performed by the smart toilet for health detection disclosed in the above embodiments shown in this specification may be applied to the processor, or implemented by the processor.
  • a processor is an integrated circuit chip that has the ability to process signals.
  • each step of the above method can be implemented by an integrated hardware logic circuit in the processor or by software; of course, the electronic device of the embodiments of this specification does not exclude other implementations, such as logic devices or a combination of software and hardware; that is, the execution subject of the processing flow is not limited to logic units and may also be a hardware or logic device.
  • a typical implementation device is a computer.
  • the computer can be, for example, a personal computer, a notebook computer, a mobile phone, a smart phone, a personal digital assistant, a media player, a navigation device, an email device, a game console, a tablet computer, a wearable device, or a combination of any of these devices.
  • computer-readable media include persistent and non-persistent, removable and non-removable media, and storage of information may be implemented by any method or technology.
  • Information may be computer readable instructions, data structures, modules of programs, or other data.
  • examples of computer storage media include, but are not limited to, phase-change memory (PRAM), static random access memory (SRAM), dynamic random access memory (DRAM), other types of random access memory (RAM), read-only memory (ROM), electrically erasable programmable read-only memory (EEPROM), flash memory or other memory technology, compact disc read-only memory (CD-ROM), digital versatile disc (DVD) or other optical storage, magnetic tape cassettes, magnetic tape/magnetic disk storage or other magnetic storage devices, or any other non-transmission medium that can be used to store information accessible by a computing device.
  • computer-readable media does not include transitory computer-readable media, such as modulated data signals and carrier waves.


Abstract

A smart toilet for health detection and a health detection method thereof. The smart toilet comprises a toilet, a human body surface recognition system, a multispectral control and computational imaging system, a non-visible light source control system, a multispectral light source, an optical receiver, a projection objective lens and a binocular camera. The optical receiver converts the received non-visible light into electrical signals and sends them to the human body surface recognition system and the multispectral control and computational imaging system, which identify the user and locate the user's target cleaning part; the fecal image signal captured by the projection objective lens is sent to the multispectral control and computational imaging system, which processes and analyzes the image of the measured object; the binocular camera captures and analyzes the user's urination frame images, and the urination frame images are processed.

Description

Smart toilet for health detection and health detection method thereof
Technical Field
The present invention belongs to the technical field of intelligent bathroom detection, and particularly relates to a smart toilet for health detection and a health detection method thereof.
Background Art
At present, the toilet, though a tool people use daily, lacks any function for routine health observation of its users. In the prior art, the user is confirmed through fingerprint recognition and visible-light face recognition, but such methods are unsanitary, lack privacy and provide no health observation function. In hospitals, as shown in FIG. 1, mechanical uroflowmeters require manual operation. For observing health status from the shape of feces there is the BSFS (Bristol Stool Form Scale) method; the BSFS classifies the form human feces take after defecation and requires visual inspection by a physician, which is insufficiently accurate and inconvenient to operate.
发明内容
本说明书实施例的目的是提提供一种健康检测的智能马桶,通过对用户的粪便识别和尿流分析,判断用户的健康状况,操作简单,不需要人工,并且保护了用户的隐私。
为解决上述技术问题,本说明书实施例通过以下方式实现的:
In one aspect, an intelligent toilet for health detection is provided, comprising a toilet body, a human-body surface recognition system, a multispectral control and computational imaging system, an invisible-light source control system, a multispectral light source, an optical receiver, and a binocular camera, wherein one of the two cameras of the binocular camera includes a projection objective lens. The multispectral light source, the optical receiver, and the projection objective lens are arranged directly above the toilet. The human-body surface recognition system is connected to the multispectral control and computational imaging system and to the optical receiver, and the multispectral control and computational imaging system is connected to the invisible-light source control system. Through the invisible-light source control system, the multispectral control and computational imaging system controls the multispectral light source to emit invisible light onto the surface of feces in the toilet; after reflection from the object surface, the light enters the optical receiver, which converts the received invisible light into electrical signals and sends them to the human-body surface recognition system and the multispectral control and computational imaging system to identify the user and locate the user's target cleaning area. The projection objective lens receives the invisible light from the feces in the toilet and captures it through a micro-display surface; the captured image signal of the feces in the toilet is sent to the multispectral control and computational imaging system, which processes and analyzes the image of the measured object.
The binocular camera is arranged on both sides of the toilet; its two cameras are placed at a right angle, facing away from the user, to capture frame images of the user's urination for analyzing urine dynamics and to analyze the urine composition through multispectral imaging. The binocular camera is connected to the multispectral control and computational imaging system to process the urination frame images.
In another aspect, a health detection method of the intelligent toilet is provided, in which the user's state of health is judged through recognition of feces in the toilet and through detection of the urine flow.
The process of judging the user's state of health through recognition of feces in the toilet is as follows: emit invisible light onto the surface of the feces in the toilet; receive the invisible light reflected from the object surface and convert it into electrical signals; locate and identify the user's physical signs; receive the invisible light from the feces in the toilet and capture images; send the image signal of the feces in the toilet to the multispectral control and computational imaging system; according to the recognition result of the human-body surface recognition system, combine the electrical signals from the optical receiver with the image signal of the feces in the toilet from the micro-display surface to obtain picture data with 3D information; locate and identify the physical signs through an IMD neural network model, recognize the detected objects in the toilet, and infer the user's health changes.
The specific process of judging the user's state of health through detection of the urine flow is as follows: capture and analyze frame images of the user's urination; capture the user's urine stream; synchronize the image frames; perform depth estimation by geometric computation from the two synchronized frames; estimate and correct the flow velocity; measure the urine flow rate, plot the measured per-second urine flow rates as a curve, and derive the urine-flow parameters from the curve, including the maximum flow rate, flow time, average flow rate, time to maximum flow rate, 2-second flow rate, and total urine volume; check whether urination is normal and judge whether the user's urinary tract is obstructed.
The present invention performs user identification of physical signs through invisible-light sensing technology. By applying automatic excreta-recognition technology, it provides efficient daily observation of the user's health, discovers hidden health risks in time, and realizes a health-observation function for sanitary-ware products. To ensure privacy, the invention uses invisible-light sensing technology. The IMD neural network model of the invention uses multiple small one-dimensional convolution kernels to extract image features at different scales and fuses them into a feature map richer in spatial information, and it performs detection on feature maps of different scales, making it suitable for detecting objects of various sizes. The invention uses cloud computing and/or edge computing with corresponding AI chips, avoiding the time cost of data transmission and further protecting user privacy.
Brief Description of the Drawings
To explain the technical solutions in the embodiments of this specification or in the prior art more clearly, the drawings needed in the description of the embodiments or the prior art are briefly introduced below. Obviously, the drawings described below are only some embodiments recorded in this specification; for those of ordinary skill in the art, other drawings can be obtained from them without creative effort.
Figure 1 is a diagram of a uroflowmeter in the prior art;
Figure 2 is a schematic diagram of the invisible-light physical-sign sensing technology of Embodiment 1 of this specification;
Figure 3 is a detection diagram of the invisible-light physical-sign sensing technology of Embodiment 1 of this specification;
Figure 4 is a schematic diagram of the AI module/chip of Embodiment 1 of this specification;
Figure 5 is a schematic diagram of the stool-state classification and recognition process of Embodiment 1 of this specification;
Figure 6 is a schematic diagram of the IMD neural network model of Embodiment 1 of this specification;
Figure 7 shows the high-frame-rate urine-flow analysis based on the binocular camera of Embodiment 2 of this specification;
Figure 8 is a schematic diagram of the binocular stereo-vision depth-measurement principle of Embodiment 2 of this specification.
Detailed Description of the Embodiments
To enable those skilled in the art to better understand the technical solutions in this specification, the technical solutions in the embodiments of this specification are described clearly and completely below with reference to the drawings of the embodiments. Obviously, the described embodiments are only some, not all, of the embodiments of this specification. Based on the embodiments in this specification, all other embodiments obtained by those of ordinary skill in the art without creative effort shall fall within the scope of protection of this specification.
Embodiment 1
Referring to Figure 2, Embodiment 1 of this specification provides an intelligent toilet for health detection, comprising a toilet body, a human-body surface recognition system, a multispectral control and computational imaging system, an invisible-light source control system, a multispectral light source, an optical receiver, and a binocular camera, wherein one of the two cameras of the binocular camera includes a projection objective lens. The multispectral light source, the optical receiver, and the projection objective lens are arranged directly above the toilet. The human-body surface recognition system is connected to the multispectral control and computational imaging system and to the optical receiver, and the multispectral control and computational imaging system is connected to the invisible-light source control system. Through the invisible-light source control system, the multispectral control and computational imaging system controls the multispectral light source to emit invisible light onto the surface of feces in the toilet; after reflection from the object surface, the light enters the optical receiver, which converts the received invisible light into electrical signals and sends them to the human-body surface recognition system and the multispectral control and computational imaging system to identify the user and locate the user's target cleaning area. The projection objective lens receives the invisible light from the feces in the toilet and captures it through a micro-display surface; the captured image signal of the feces in the toilet is sent to the multispectral control and computational imaging system, which processes and analyzes the image of the measured object.
The binocular camera is arranged on both sides of the toilet; its two cameras are placed at a right angle, facing away from the user, to capture frame images of the user's urination for analyzing urine dynamics and to analyze the urine composition through multispectral imaging. The binocular camera is connected to the multispectral control and computational imaging system to process the urination frame images.
The human-body surface recognition system is used to identify the human body; in each health detection it locates and identifies the target cleaning area of the human body and performs health detection.
The multispectral control and computational imaging system is used to combine, according to the recognition result of the human-body surface recognition system, the electrical signals from the optical receiver with the image signal of the feces in the toilet from the micro-display surface to obtain picture data with 3D information, and, from this picture data with 3D information, to locate and identify the physical signs through an IMD (Inception-Multibox Detector) neural network model.
The multispectral light source is an invisible infrared (IR) multispectral light source for emitting specially modulated invisible infrared light onto the feces.
The optical receiver is an invisible infrared (IR) receiver for receiving the invisible infrared light reflected back from the feces, providing the 3D spatial information of the feces.
The projection objective lens adopts an ordinary lens module and captures 2D picture data using invisible light.
To ensure privacy, the invention uses invisible-light sensing technology, with the projection objective lens capturing only 2D picture data.
The multispectral control and computational imaging system combines the 2D picture data captured by the ordinary lens module with the 3D spatial information acquired by the invisible infrared receiver and obtains, through algorithmic processing, picture data of the feces with 3D information.
Referring to Figures 5 and 6, recognition using the IMD (Inception-Multibox Detector) neural network model is specifically as follows: an image to be detected is input; at the front of the neural network model there are 5 convolutional layers and 2 pooling layers, followed by 3 inception modules and 2 feature-map-size-reduction modules, the 2 feature-map-size-reduction modules being arranged in the intervals between the 3 inception modules, with concatenation layers arranged between the 3 inception modules and the 2 feature-map-size-reduction modules. Each inception module comprises several convolutional layers using filters of different sizes and a max-pooling layer; each feature-map-size-reduction module comprises several convolutional layers and an average-pooling layer. The concatenation layers fuse features of different scales, and additional convolutional layers are connected to predict bounding boxes, confidence scores, and class labels. Non-maximum suppression is applied to the predicted results, and the final output of bounding boxes, confidences, and classes realizes 3D localization and user identification.
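The front end described above can be sketched in terms of feature-map bookkeeping. The kernel sizes, strides, and "same" padding below are illustrative assumptions (the text does not fix them); the sketch only shows how spatial sizes propagate through a front end of 5 convolutional layers and 2 pooling layers:

```python
def conv2d_out(size: int, kernel: int, stride: int = 1, pad: int = 0) -> int:
    """Spatial output size of a convolution (or pooling) layer."""
    return (size + 2 * pad - kernel) // stride + 1

def imd_frontend_sizes(input_size: int = 300) -> list:
    """Trace feature-map sizes through a hypothetical IMD front end:
    5 convolutional layers followed by 2 pooling layers.
    Kernel/stride/padding choices are illustrative assumptions."""
    sizes = [input_size]
    s = input_size
    # 5 conv layers with 3x3 kernels and 'same' padding keep the size
    for _ in range(5):
        s = conv2d_out(s, kernel=3, stride=1, pad=1)
        sizes.append(s)
    # 2 max-pool layers with 2x2 kernels and stride 2 halve the size
    for _ in range(2):
        s = conv2d_out(s, kernel=2, stride=2, pad=0)
        sizes.append(s)
    return sizes
```

Under these assumptions a 300x300 input stays at 300 through the convolutions and is reduced to 150 and then 75 by the two pooling layers, after which the inception and size-reduction modules operate on progressively smaller multi-scale feature maps.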
The IMD neural network model has the following two advantages: first, it uses multiple small one-dimensional convolution kernels to extract image features at different scales and fuses them into a feature map richer in spatial information; second, it performs detection on feature maps of different scales, making it suitable for detecting objects of various sizes.
Based on the IMD neural network model, the detected objects in the toilet are recognized. The recognition results include clean, excreta, and toilet paper; the excreta are classified and then summarized as constipation, normal, or diarrhea. Through automatic daily recording and classification, the user's health changes are inferred according to the Bristol Stool Form Scale (BSFS).
Further, as shown in Figure 5, the excreta are divided into 7 levels (BS1-7), of which levels 1-2 (BS1-2) indicate constipation, levels 3-5 (BS3-5) are normal, and levels 6-7 (BS6-7) indicate diarrhea.
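The BSFS level-to-category mapping described above can be sketched as follows (the function name and category strings are illustrative, not from the source):

```python
def bsfs_category(bs_level: int) -> str:
    """Map a Bristol Stool Form Scale level (BS1-7) to the three
    summary categories used in this embodiment."""
    if not 1 <= bs_level <= 7:
        raise ValueError("BSFS level must be between 1 and 7")
    if bs_level <= 2:        # BS1-2: hard, lumpy stools
        return "constipation"
    if bs_level <= 5:        # BS3-5: normal forms
        return "normal"
    return "diarrhea"        # BS6-7: mushy or liquid stools
```

Daily classification results could then be aggregated per user to track trends across the three categories over time.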
Embodiment 2
Referring to Figure 7, Embodiment 2 of this specification provides an intelligent toilet for health detection, comprising a toilet body, a human-body surface recognition system, a multispectral control and computational imaging system, an invisible-light source control system, a multispectral light source, an optical receiver, and a binocular camera, wherein one of the two cameras of the binocular camera includes a projection objective lens. The multispectral light source, the optical receiver, and the projection objective lens are arranged directly above the toilet. The human-body surface recognition system is connected to the multispectral control and computational imaging system and to the optical receiver, and the multispectral control and computational imaging system is connected to the invisible-light source control system. Through the invisible-light source control system, the multispectral control and computational imaging system controls the multispectral light source to emit invisible light onto the surface of feces in the toilet; after reflection from the object surface, the light enters the optical receiver, which converts the received invisible light into electrical signals and sends them to the human-body surface recognition system and the multispectral control and computational imaging system to identify the user and locate the user's target cleaning area. The projection objective lens receives the invisible light from the feces in the toilet and captures it through a micro-display surface; the captured image signal of the feces in the toilet is sent to the multispectral control and computational imaging system, which processes and analyzes the image of the measured object.
The binocular camera is arranged on both sides of the toilet; its two lenses are placed at a right angle, facing away from the user, to capture frame images of the user's urination for analyzing urine dynamics and to analyze the urine composition through multispectral imaging. The binocular camera is connected to the multispectral control and computational imaging system to process the urination frame images.
The urination frame images are processed as follows:
Step 1: capture the user's urine stream falling within the FOV of the wide-angle camera of the binocular camera, specifically:
Step 1.1: correct the distortion of the wide-angle FOV frames;
Step 1.2: convert to black and white and smooth;
Step 1.3: extract the urine-stream region within the frame by background subtraction;
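The smoothing and background-subtraction part of step 1 can be sketched as follows, assuming grayscale frames as NumPy arrays; the 3x3 box filter and the threshold of 25 are illustrative assumptions standing in for the unspecified smoothing and subtraction parameters:

```python
import numpy as np

def extract_stream_region(frame: np.ndarray, background: np.ndarray,
                          threshold: int = 25) -> np.ndarray:
    """Smooth both grayscale images, subtract the static background,
    and threshold the absolute difference to obtain a binary mask of
    the urine-stream region within the frame."""
    def box_blur(img: np.ndarray) -> np.ndarray:
        # Pad by edge replication, then average the 3x3 neighbourhood.
        p = np.pad(img.astype(np.float32), 1, mode="edge")
        return sum(p[dy:dy + img.shape[0], dx:dx + img.shape[1]]
                   for dy in range(3) for dx in range(3)) / 9.0

    diff = np.abs(box_blur(frame) - box_blur(background))
    return (diff > threshold).astype(np.uint8)
```

In practice the same operation would typically run on the distortion-corrected frames from step 1.1, with the background image acquired before urination begins.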
Step 2: synchronize the image frames of the binocular camera;
Step 3: depth estimation; since a single camera lacks depth information, depth is estimated by geometric computation from the two synchronized frames.
As shown in Figure 8, the depth estimation of step 3 adopts binocular stereo-vision depth estimation, specifically:
Step 3.1: calibrate the binocular camera to obtain its intrinsic and extrinsic parameters and the homography matrix;
Step 3.2: rectify the original images according to the calibration result, so that the two rectified images lie in the same plane and are parallel to each other;
Step 3.3: match the pixel points of the two rectified images;
The matching formula is:

    x = b·x_l / (x_l − x_r),  y = b·y_l / (x_l − x_r)

where x, y is the actual position, x_l, y_l is the pixel position in the image captured by the left camera of the binocular camera, and x_r, y_r is the pixel position in the image captured by the right camera.
Step 3.4: compute the depth of each pixel from the matching result; the depth formula is:

    z = f·b / (x_l − x_r)

where f is the focal length of the binocular camera's lenses and b is the distance between the left and right cameras of the binocular camera.
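The matching and depth formulas above combine into a single triangulation step for each matched pixel pair (the function name and the sample values below are illustrative):

```python
def triangulate(x_l: float, y_l: float, x_r: float, f: float, b: float):
    """Recover the actual position (x, y) and depth z of a matched pixel
    pair from a rectified stereo pair. After rectification y_l == y_r,
    so only x_l, y_l, and x_r are needed; the disparity is d = x_l - x_r."""
    d = x_l - x_r
    if d <= 0:
        raise ValueError("disparity must be positive for a valid match")
    x = b * x_l / d
    y = b * y_l / d
    z = f * b / d            # depth formula: z = f*b / (x_l - x_r)
    return x, y, z
```

For example, with focal length f = 700 (pixels), baseline b = 0.06 m, and a disparity of 20 pixels, the depth is 700 × 0.06 / 20 = 2.1 m.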
Step 4: flow-velocity estimation and correction, specifically:
Step 4.1: account for urine-volume measurement errors caused by flow-velocity variation;
Step 4.2: estimate the flow velocity using two ROIs (Regions of Interest) within the image frames;
Step 4.3: perform rate correction by dividing the sum of the depth-corrected pixel values by the frame offset;
Step 5: measure the urine flow rate; plot the per-second urine flow rates measured in steps 1-4 as a curve and derive the urine-flow parameters from the curve, including the maximum flow rate, flow time, average flow rate, time to maximum flow rate, 2-second flow rate, and total urine volume; check whether urination is normal and judge whether the user's urinary tract is obstructed.
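Assuming the per-second flow-rate samples are already available, the parameters listed in step 5 can be derived as in the following sketch; the dictionary keys and the reading of the 2-second flow rate as the sample 2 s after voiding starts are assumptions:

```python
def uroflow_parameters(flow_ml_per_s: list) -> dict:
    """Derive the step-5 uroflowmetry parameters from a per-second
    urine flow-rate curve (mL/s). With 1 s sampling, the sum of the
    samples approximates the total voided volume."""
    if not flow_ml_per_s:
        raise ValueError("empty flow curve")
    q_max = max(flow_ml_per_s)
    t_q_max = flow_ml_per_s.index(q_max)             # time to maximum flow (s)
    flow_time = sum(1 for q in flow_ml_per_s if q > 0)
    total_volume = float(sum(flow_ml_per_s))
    q_avg = total_volume / flow_time if flow_time else 0.0
    start = next((i for i, q in enumerate(flow_ml_per_s) if q > 0), 0)
    q_2s = flow_ml_per_s[start + 2] if start + 2 < len(flow_ml_per_s) else 0.0
    return {"Qmax": q_max, "t_Qmax": t_q_max, "flow_time": flow_time,
            "Qavg": q_avg, "Q2s": q_2s, "total_volume": total_volume}
```

A low maximum flow rate with a prolonged flow time on such a curve is the kind of pattern that would then be flagged for the obstruction check mentioned above.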
The localization and identification of the user's physical signs, the image processing of the measured object, and the processing of the urination frame images all use cloud computing and/or edge computing. In the model-training phase, the image data are uploaded to the cloud, and an image-recognition model is trained on a deep-learning framework using cloud computing; after training, as shown in Figure 4, the model is embedded in an edge AI chip. In the real-time recognition phase, faster and safer edge computing is used: the image-recognition process runs on the edge AI chip, avoiding the time cost of data transmission and further protecting user privacy.
Embodiment 3
A health detection method of an intelligent toilet, in which the user's state of health is judged through recognition of feces in the toilet and through detection of the urine flow.
The process of judging the user's state of health through recognition of feces in the toilet is as follows:
Step A1: through the invisible-light source control system, the multispectral control and computational imaging system controls the multispectral light source to emit invisible light onto the surface of the feces in the toilet;
Step A2: after reflection from the object surface, the light enters the optical receiver, which converts the received invisible light into electrical signals and sends them to the human-body surface recognition system and the multispectral control and computational imaging system;
Step A3: the human-body surface recognition system locates and identifies the user's physical signs;
Step A4: the projection objective lens receives the invisible light from the feces in the toilet and captures it through the micro-display surface;
Step A5: the image signal of the feces in the toilet captured by the micro-display surface is sent to the multispectral control and computational imaging system;
Step A6: according to the recognition result of the human-body surface recognition system, the multispectral control and computational imaging system combines the electrical signals from the optical receiver with the image signal of the feces in the toilet from the micro-display surface to obtain picture data with 3D information;
Step A7: from the picture data with 3D information, the physical signs are located and identified through the IMD (Inception-Multibox Detector) neural network model;
Step A8: based on the IMD neural network model, the detected objects in the toilet are recognized; the recognition results include clean, excreta, and toilet paper; the excreta are classified and then summarized as constipation, normal, or diarrhea; through automatic daily recording and classification, the user's health changes are inferred according to the Bristol Stool Form Scale (BSFS).
The recognition by the IMD neural network model in step A8 is specifically as follows: an image to be detected is input; at the front of the neural network model there are 5 convolutional layers and 2 pooling layers, followed by 3 inception modules and 2 feature-map-size-reduction modules, the 2 feature-map-size-reduction modules being arranged in the intervals between the 3 inception modules, with concatenation layers arranged between the 3 inception modules and the 2 feature-map-size-reduction modules; each inception module comprises several convolutional layers using filters of different sizes and a max-pooling layer; each feature-map-size-reduction module comprises several convolutional layers and an average-pooling layer; the concatenation layers fuse features of different scales, and additional convolutional layers are connected to predict bounding boxes, confidence scores, and class labels; non-maximum suppression is applied to the predicted results, and the final output of bounding boxes, confidences, and classes realizes 3D localization and user identification.
The IMD neural network model has the following two advantages: first, it uses multiple small one-dimensional convolution kernels to extract image features at different scales and fuses them into a feature map richer in spatial information; second, it performs detection on feature maps of different scales, making it suitable for detecting objects of various sizes.
As shown in Figure 5, the excreta are divided into 7 levels (BS1-7), of which levels 1-2 (BS1-2) indicate constipation, levels 3-5 (BS3-5) are normal, and levels 6-7 (BS6-7) indicate diarrhea.
The specific process of judging the user's state of health through detection of the urine flow is as follows:
Step B1: the binocular camera captures and analyzes frame images of the user's urination;
Step B2: capture the user's urine stream falling within the FOV of the wide-angle camera of the binocular camera, specifically:
Step B2.1: correct the distortion of the wide-angle FOV frames;
Step B2.2: convert to black and white and smooth;
Step B2.3: extract the urine-stream region within the frame by background subtraction;
Step B3: synchronize the image frames of the binocular camera;
Step B4: depth estimation; since a single camera lacks depth information, depth is estimated by geometric computation from the two synchronized frames.
As shown in Figure 8, the depth estimation of step B4 adopts binocular stereo-vision depth estimation, specifically:
Step B4.1: calibrate the binocular camera to obtain its intrinsic and extrinsic parameters and the homography matrix;
Step B4.2: rectify the original images according to the calibration result, so that the two rectified images lie in the same plane and are parallel to each other;
Step B4.3: match the pixel points of the two rectified images;
The matching formula is:

    x = b·x_l / (x_l − x_r),  y = b·y_l / (x_l − x_r)

where x, y is the actual position, x_l, y_l is the pixel position in the image captured by the left camera of the binocular camera, and x_r, y_r is the pixel position in the image captured by the right camera;
Step B4.4: compute the depth of each pixel from the matching result; the depth formula is:

    z = f·b / (x_l − x_r)

where f is the focal length of the binocular camera's lenses and b is the distance between the left and right cameras of the binocular camera;
Step B5: flow-velocity estimation and correction, specifically:
Step B5.1: account for urine-volume measurement errors caused by flow-velocity variation;
Step B5.2: estimate the flow velocity using two ROIs (Regions of Interest) within the image frames;
Step B5.3: perform rate correction by dividing the sum of the depth-corrected pixel values by the frame offset;
Step B6: measure the urine flow rate; plot the per-second urine flow rates measured in steps B2-B5 as a curve and derive the urine-flow parameters from the curve, including the maximum flow rate, flow time, average flow rate, time to maximum flow rate, 2-second flow rate, and total urine volume; check whether urination is normal and judge whether the user's urinary tract is obstructed.
The detection method executed by the intelligent toilet for health detection disclosed in the embodiments shown in this specification may be applied in, or implemented by, a processor. The processor is an integrated circuit chip with signal-processing capability. In the implementation process, the steps of the above method may be completed by integrated hardware logic circuits in the processor or by instructions in the form of software. Of course, besides the software implementation, the electronic device of the embodiments of this specification does not exclude other implementations, such as logic devices or a combination of software and hardware; that is to say, the execution subject of the following processing flow is not limited to the individual logic units and may also be hardware or logic devices.
In short, the above are only preferred embodiments of this specification and are not intended to limit its scope of protection. Any modification, equivalent replacement, improvement, etc. made within the spirit and principles of this specification shall be included within its scope of protection.
The systems, apparatuses, modules, or units described in the above embodiments may be implemented by computer chips or entities, or by products having certain functions. A typical implementation device is a computer. Specifically, the computer may be, for example, a personal computer, a laptop computer, a mobile phone, a smartphone, a personal digital assistant, a media player, a navigation device, an e-mail device, a game console, a tablet computer, a wearable device, or a combination of any of these devices.
Computer-readable media include permanent and non-permanent, removable and non-removable media, and information storage may be implemented by any method or technology. The information may be computer-readable instructions, data structures, program modules, or other data. Examples of computer storage media include, but are not limited to, phase-change memory (PRAM), static random-access memory (SRAM), dynamic random-access memory (DRAM), other types of random-access memory (RAM), read-only memory (ROM), electrically erasable programmable read-only memory (EEPROM), flash memory or other memory technologies, compact-disc read-only memory (CD-ROM), digital versatile discs (DVD) or other optical storage, magnetic cassettes, magnetic tape or disk storage or other magnetic storage devices, or any other non-transmission medium that can be used to store information accessible by a computing device. As defined herein, computer-readable media do not include transitory media such as modulated data signals and carrier waves.
It should also be noted that the terms "comprise", "include", or any other variant thereof are intended to cover non-exclusive inclusion, so that a process, method, article, or device that includes a series of elements includes not only those elements but also other elements not explicitly listed, or elements inherent to such a process, method, article, or device. Without further limitation, an element defined by the phrase "comprising a ..." does not exclude the existence of additional identical elements in the process, method, article, or device that includes that element.
The embodiments in this specification are described in a progressive manner; for identical or similar parts between the embodiments, reference may be made to each other, and each embodiment focuses on its differences from the others. In particular, since the system embodiments are substantially similar to the method embodiments, their description is relatively simple, and for relevant parts reference may be made to the partial description of the method embodiments.

Claims (10)

  1. An intelligent toilet for health detection, comprising a toilet body, a human-body surface recognition system, a multispectral control and computational imaging system, an invisible-light source control system, a multispectral light source, an optical receiver, and a binocular camera, wherein one of the two cameras of the binocular camera includes a projection objective lens; the multispectral light source, the optical receiver, and the projection objective lens are arranged directly above the toilet; the human-body surface recognition system is connected to the multispectral control and computational imaging system and to the optical receiver; the multispectral control and computational imaging system is connected to the invisible-light source control system; through the invisible-light source control system, the multispectral control and computational imaging system controls the multispectral light source to emit invisible light onto the surface of feces in the toilet; after reflection from the object surface, the light enters the optical receiver, which converts the received invisible light into electrical signals and sends them to the human-body surface recognition system and the multispectral control and computational imaging system to identify the user and locate the user's target cleaning area; the projection objective lens receives the invisible light from the feces in the toilet and captures it through a micro-display surface; the captured image signal of the feces in the toilet is sent to the multispectral control and computational imaging system, which processes and analyzes the image of the feces;
    the binocular camera is arranged on both sides of the toilet, its two cameras placed at a right angle, facing away from the user, to capture frame images of the user's urination for analyzing urine dynamics and to analyze the urine composition through multispectral imaging; the binocular camera is connected to the multispectral control and computational imaging system to process the urination frame images.
  2. The intelligent toilet for health detection according to claim 1, characterized in that: the multispectral control and computational imaging system is used to combine, according to the recognition result of the human-body surface recognition system, the electrical signals from the optical receiver with the image signal of the feces in the toilet from the micro-display surface to obtain picture data with 3D information, and to locate and identify the physical signs from the picture data with 3D information through an IMD neural network model.
  3. The intelligent toilet for health detection according to claim 2, characterized in that recognition using the IMD neural network model is specifically as follows: an image to be detected is input; at the front of the neural network model there are 5 convolutional layers and 2 pooling layers, followed by 3 inception modules and 2 feature-map-size-reduction modules, the 2 feature-map-size-reduction modules being arranged in the intervals between the 3 inception modules, with concatenation layers arranged between the 3 inception modules and the 2 feature-map-size-reduction modules; each inception module comprises several convolutional layers using filters of different sizes and a max-pooling layer; each feature-map-size-reduction module comprises several convolutional layers and an average-pooling layer; the concatenation layers fuse features of different scales, and additional convolutional layers are connected to predict bounding boxes, confidence scores, and class labels; non-maximum suppression is applied to the predicted results, and the final output of bounding boxes, confidences, and classes realizes 3D localization and user identification.
  4. The intelligent toilet for health detection according to claim 1, characterized in that the specific process of judging the user's state of health through detection of the urine flow is as follows: the binocular camera captures and analyzes frame images of the user's urination; the user's urine stream falling within the FOV of the wide-angle camera of the binocular camera is captured; the image frames of the binocular camera are synchronized; depth estimation is performed, since a single camera lacks depth information, by geometric computation from the two synchronized frames; the flow velocity is estimated and corrected; the urine flow rate is measured, the measured per-second urine flow rates are plotted as a curve, and the urine-flow parameters, including the maximum flow rate, flow time, average flow rate, time to maximum flow rate, 2-second flow rate, and total urine volume, are derived from the curve; whether urination is normal is checked, and whether the user's urinary tract is obstructed is judged.
  5. The intelligent toilet for health detection according to claim 4, characterized in that the capturing and analyzing of the frame images of the user's urination by the binocular camera is specifically: correcting the distortion of the wide-angle FOV frames; converting to black and white and smoothing; extracting the urine-stream region within the frame by background subtraction.
  6. The intelligent toilet for health detection according to claim 4, characterized in that the depth estimation adopts binocular stereo-vision depth estimation, specifically: the binocular camera is calibrated to obtain its intrinsic and extrinsic parameters and the homography matrix; the original images are rectified according to the calibration result, so that the two rectified images lie in the same plane and are parallel to each other; the pixel points of the two rectified images are matched; the matching formula is:
    x = b·x_l / (x_l − x_r),  y = b·y_l / (x_l − x_r)
    where x, y is the actual position, x_l, y_l is the pixel position in the image captured by the left camera of the binocular camera, and x_r, y_r is the pixel position in the image captured by the right camera; the depth of each pixel is computed from the matching result, the depth formula being:
    z = f·b / (x_l − x_r)
    where f is the focal length of the binocular camera's lenses and b is the distance between the left and right cameras of the binocular camera.
  7. The intelligent toilet for health detection according to claim 4, characterized in that the flow-velocity estimation and correction is specifically: urine-volume measurement errors caused by flow-velocity variation are accounted for; the flow velocity is estimated using two ROIs within the image frames; rate correction is performed by dividing the sum of the depth-corrected pixel values by the frame offset.
  8. The intelligent toilet for health detection according to claim 1, characterized in that: the processing and analysis of the image of the feces is based on the IMD neural network model, which recognizes the detected objects in the toilet; the recognition results include clean, excreta, and toilet paper; the excreta are classified and then summarized as constipation, normal, or diarrhea; through automatic daily recording and classification, the user's health changes are inferred according to the BSFS.
  9. The intelligent toilet for health detection according to any one of claims 1-8, characterized in that: user identification, localization of the user's target cleaning area, feces-image processing, and urination-frame-image processing all use cloud computing and/or edge computing.
  10. A health detection method based on the intelligent toilet according to any one of claims 1-9, characterized in that: the user's state of health is judged through recognition of feces in the toilet and through detection of the urine flow,
    wherein the process of judging the user's state of health through recognition of feces in the toilet is as follows: emit invisible light onto the surface of the feces in the toilet; receive the invisible light reflected from the object surface and convert it into electrical signals; locate and identify the user's physical signs; receive the invisible light from the feces in the toilet and capture images; send the image signal of the feces in the toilet to the multispectral control and computational imaging system; according to the recognition result of the human-body surface recognition system, combine the electrical signals from the optical receiver with the image signal of the feces in the toilet from the micro-display surface to obtain picture data with 3D information; locate and identify the physical signs through the IMD neural network model, recognize the detected objects in the toilet, and infer the user's health changes;
    wherein the specific process of judging the user's state of health through detection of the urine flow is as follows: capture and analyze frame images of the user's urination; capture the user's urine stream; synchronize the image frames; perform depth estimation by geometric computation from the two synchronized frames; estimate and correct the flow velocity; measure the urine flow rate, plot the measured per-second urine flow rates as a curve, and derive the urine-flow parameters from the curve, including the maximum flow rate, flow time, average flow rate, time to maximum flow rate, 2-second flow rate, and total urine volume; check whether urination is normal and judge whether the user's urinary tract is obstructed.
PCT/CN2021/081980 2021-03-03 2021-03-22 Intelligent toilet for health detection and health detection method thereof WO2022183536A1 (zh)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
CN202110236533.XA CN113062421A (zh) 2021-03-03 2021-03-03 Intelligent toilet for health detection and health detection method thereof
CN202110236533.X 2021-03-03

Publications (1)

Publication Number Publication Date
WO2022183536A1 true WO2022183536A1 (zh) 2022-09-09

Family

ID=76559606

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2021/081980 WO2022183536A1 (zh) 2021-03-03 2021-03-22 Intelligent toilet for health detection and health detection method thereof

Country Status (2)

Country Link
CN (1) CN113062421A (zh)
WO (1) WO2022183536A1 (zh)

Families Citing this family (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN113866114B (zh) * 2021-09-30 2024-05-17 温州医科大学 Urine detection method and apparatus, device, and computer storage medium

Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP2007252805A * 2006-03-24 2007-10-04 Konica Minolta Holdings Inc Data detection apparatus and data detection method
CN105507394A * 2016-01-30 2016-04-20 武汉大学 Intelligent toilet implementing urodynamic detection, health monitoring method, and supporting health monitoring system
CN110461219A * 2017-04-07 2019-11-15 托伊实验室公司 Apparatus, method, and system for biomonitoring in a toilet environment
CN111699387A * 2020-03-05 2020-09-22 厦门波耐模型设计有限责任公司 Toilet-type urine and stool detection robot and Internet-of-Things system thereof
JP2020190181A * 2019-05-17 2020-11-26 株式会社Lixil Determination apparatus, determination method, and program

Family Cites Families (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
KR101781996B1 (ko) * 2015-11-03 2017-09-26 임대환 Personal health analysis method using a smart bidet, and smart bidet performing such a method
TW202046235A (zh) * 2019-06-11 2020-12-16 林芝馨 Stool condition collection and analysis system
CN111076365A (zh) * 2019-12-03 2020-04-28 珠海格力电器股份有限公司 Method for automatically adjusting the cooling and heating capacity of an air conditioner, and air conditioner
CN112102257B (zh) * 2020-08-26 2022-11-08 电子科技大学 Automatic human stool recognition method based on a convolutional neural network


Also Published As

Publication number Publication date
CN113062421A (zh) 2021-07-02

Similar Documents

Publication Publication Date Title
JP6667596B2 (ja) Object detection system, autonomous vehicle using the same, and object detection method therefor
CN105627932B (zh) Binocular-vision-based distance measurement method and apparatus
CN105933589B (zh) Image processing method and terminal
US10375378B2 (en) Dual camera system for real-time depth map generation
CN102833486B (zh) Method and apparatus for adjusting the display proportion of a face in a video image in real time
US6370262B1 (en) Information processing apparatus and remote apparatus for object, using distance measuring apparatus
WO2014044126A1 (zh) Coordinate acquisition apparatus, real-time three-dimensional reconstruction system and method, and stereoscopic interactive device
CN107018323B (zh) Control method, control apparatus, and electronic apparatus
CN111539311B (zh) Living-body discrimination method, apparatus, and system based on IR and RGB dual cameras
JP2021531601A (ja) Neural network training, gaze detection method and apparatus, and electronic device
JP4974765B2 (ja) Image processing method and apparatus
CN106991378A (zh) Depth-based face orientation detection method, detection apparatus, and electronic apparatus
TW201220253A (en) Image calculation method and apparatus
CN112423191B (zh) Video call device and audio gain method
CN111967288A (zh) Intelligent three-dimensional object recognition and positioning system and method
WO2022116104A1 (zh) Image processing method, apparatus, device, and storage medium
WO2022183536A1 (zh) Intelligent toilet for health detection and health detection method thereof
CN108510544A (zh) Light-strip positioning method based on feature clustering
CN105180802A (zh) Object size information recognition method and apparatus
CN111814659B (zh) Living-body detection method and system
WO2019052320A1 (zh) Monitoring method, apparatus, and system, electronic device, and computer-readable storage medium
CN206460480U (zh) Crowd-density detection apparatus based on video image processing
WO2011047508A1 (en) Embedded vision tracker and mobile guiding method for tracking sequential double color beacons array with extremely wide-angle lens
CN106101542B (zh) Image processing method and terminal
CN115439875A (zh) Posture evaluation apparatus, method, and system

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 21928617

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: DE