CN110811564A - Intelligent face image and tongue image acquisition terminal system and face image and tongue image acquisition method - Google Patents


Info

Publication number
CN110811564A
CN110811564A (application CN201911240545.9A)
Authority
CN
China
Prior art keywords
image
camera
tongue
steering engine
face
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN201911240545.9A
Other languages
Chinese (zh)
Inventor
祁兴华
骆文斌
张露可
黄恩铭
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Nanjing University of Chinese Medicine
Original Assignee
Nanjing University of Chinese Medicine
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Nanjing University of Chinese Medicine filed Critical Nanjing University of Chinese Medicine
Priority to CN201911240545.9A
Publication of CN110811564A
Legal status: Pending

Classifications

    • A HUMAN NECESSITIES; A61 MEDICAL OR VETERINARY SCIENCE; HYGIENE; A61B DIAGNOSIS; SURGERY; IDENTIFICATION
    • A61B 5/00 Measuring for diagnostic purposes; Identification of persons
    • A61B 5/0059 Measuring for diagnostic purposes using light, e.g. diagnosis by transillumination, diascopy, fluorescence
    • A61B 5/45 For evaluating or diagnosing the musculoskeletal system or teeth
    • A61B 5/4538 Evaluating a particular part of the musculoskeletal system or a particular medical condition
    • A61B 5/4542 Evaluating the mouth, e.g. the jaw
    • A61B 5/4552 Evaluating soft tissue within the mouth, e.g. gums or tongue
    • A61B 5/48 Other medical applications
    • A61B 5/4854 Diagnosis based on concepts of traditional oriental medicine
    • A61B 90/00 Instruments, implements or accessories specially adapted for surgery or diagnosis and not covered by any of the groups A61B1/00 - A61B50/00
    • A61B 90/30 Devices for illuminating a surgical field, the devices having an interrelation with other surgical devices or with a surgical procedure
    • A61B 2090/309 Devices for illuminating a surgical field using white LEDs

Abstract

The invention discloses an intelligent face image and tongue image acquisition terminal system and a face image and tongue image acquisition method. The system comprises a display screen, a human body infrared pyroelectric sensor, a distance sensor, a camera and a lamp bead group. After the sensors sense a human body signal, the microprocessor turns on the display screen and the camera; the camera collects images and sends them to the microprocessor for analysis, and the microprocessor controls the camera steering engine and the main body steering engine to adjust the image capturing angle of the equipment; in addition, the illumination angle and intensity can be adjusted through the LED lamp group. The invention is compact and highly portable. By intelligently adjusting the rotation, inclination and illumination intensity of the equipment, the face/tongue image can be tracked and the device automatically moves to the best acquisition position and best acquisition illumination intensity, avoiding inaccurate manual adjustment and the influence of ambient light on image fidelity, and solving the difficult problem of acquiring images accurately. The system can also store and view large numbers of images, laying a foundation for image recognition by artificial-intelligence deep learning.

Description

Intelligent face image and tongue image acquisition terminal system and face image and tongue image acquisition method
Technical Field
The invention relates to an intelligent diagnosis terminal, in particular to an intelligent facial image and tongue image acquisition terminal and an intelligent facial image and tongue image acquisition method.
Background
With the development of intelligent diagnosis technology, more and more intelligent diagnosis terminal systems have appeared, and people can conveniently use these systems for physique detection and auxiliary diagnosis. Many diagnosis terminals analyze the health status of the human body by collecting facial images and tongue images and assist doctors in making decisions. However, the existing techniques and devices have the following problems:
1. Image acquisition is greatly affected by ambient light intensity: collected images differ under different illumination intensities, and when the difference in illumination intensity is large, the difference in image features is also large. Most existing devices do not provide a specified illumination intensity during image acquisition. Some devices have a fill light, but it has a fixed intensity and cannot be adjusted intelligently with the environment to reach the optimal illumination intensity.
2. The face cannot be captured automatically: existing devices use fixed acquisition and require the user to adjust the face position manually. They usually have a shooting box; when the face is photographed the user must put the head into the box, and although the head can move, the camera is fixed. The user cannot see the image while shooting and therefore cannot judge whether the face is captured completely and accurately. Some devices use a fixed positioning fixture: the head is placed on and fixed by the device, and its position is adjusted manually, but again the user cannot see the image during shooting and cannot judge whether the position is optimal. Even where the user can see his or her own facial image while shooting, the user experience is poor; under strong light some users cannot comfortably keep their eyes open to check whether the image is in the best position. As a result, facial features are difficult to extract from pictures taken with these techniques, a third party is often needed to adjust the camera, and the user cannot adjust it automatically to achieve the best result.
3. The tongue cannot be captured automatically: as above, most existing devices use fixed acquisition and require the user to adjust the tongue position manually, which is difficult to do alone; shooting the tongue image likewise often requires assistance from a third party. Tongue image acquisition differs markedly from face image acquisition in that the user is in a tense state while sticking the tongue out as far as possible and cannot hold it for long, yet the prior art requires the user to adjust the camera to obtain the optimal angle and position, which makes the user tired and uncomfortable and cannot guarantee the best tongue image. Moreover, the cameras of existing devices are usually placed at the side of the screen or above or below it; even when a camera is placed in the middle of the screen it is still fixed, and when the user sticks out the tongue while looking at the screen the tongue is usually low in the frame, so more of the face is captured and the tongue is not the main subject. Users of the existing techniques and devices therefore cannot take good tongue pictures by themselves.
4. Not intelligent enough, inconvenient to operate: existing facial- and tongue-diagnosis equipment such as four-diagnosis instruments requires manual operation by a professional doctor, ordinary users cannot complete the detection by themselves, and the operation process is not intelligent enough. Although mobile-terminal detection devices allow self-detection, they still require manual operation; for example, when a device has a photographing button, the movement of the arm and hand easily causes the body to shake, so the facial image and tongue image are not photographed well.
5. Large volume, not portable: existing equipment such as four-diagnosis instruments is too large and is available only in medical institutions, so it cannot be used at home. Movable detection equipment with a fill light is also bulky and not suitable for home or portable use.
Disclosure of Invention
The purpose of the invention is as follows: the invention aims to solve the problems of the prior art, such as unclear image acquisition, strong influence of ambient light, and the inability to capture a complete face or a good tongue image, and to provide an intelligent face image and tongue image acquisition terminal.
The technical scheme is as follows: the technical scheme is to be supplemented from the claims once the claims are confirmed to be correct.
Compared with the prior art, the invention has the following remarkable advantages:
1. Human-body approach judgment and intelligent detection reminders help attract customers. The invention uses a human body infrared pyroelectric sensor to judge whether a person is approaching; when a person approaches, the standby picture is played to remind the user that the equipment can be used for health detection. A distance sensor detects how close the person is; after the user comes within a set distance threshold of the detection terminal, the detection module starts, the camera is opened, and a voice prompt asks the user whether to use the equipment. The device can be placed in institutions such as traditional Chinese medicine clinics, health-preserving halls, health-management centres and beauty parlours; it intelligently reminds users to perform detection, helps attract users, increases the interactivity of the system, and helps the institutions find and acquire customers.
2. Intelligent face taking: the camera directly faces the face, the collected facial image is more standard, and the analysis result is more accurate. No manual adjustment by the user is needed. When the user is within the detection distance range, the camera automatically captures the user's facial image, the facial contour is analysed in real time by an algorithm, the motion track of the face is tracked, and it is judged whether the camera directly faces the face; if not, the detection terminal is adjusted automatically, using large-range adjustments such as rotating and tilting the main body, or small-range adjustments by rotating the camera, until the camera directly faces the face and the complete facial image is collected, realizing intelligent face taking.
3. Intelligent tongue taking: the camera directly faces the tongue, the collected tongue image is more standard, and the analysis result is more accurate. When tongue images are collected, the user's face is first tracked in real time through intelligent face taking and the large-range adjustment of the equipment is completed; when the user sticks out the tongue, the camera's focus is no longer on the face, so the user's tongue image is captured automatically, analysed in real time by an algorithm, and the camera position is finely adjusted until the camera directly faces the tongue, collecting a complete tongue image and realizing intelligent tongue taking.
4. Intelligent light supplement: images collected under optimal illumination are clearer, and the analysis result is more accurate. The invention comprises a group of several LED lamp beads, a photosensitive sensor and a camera. When the camera has finished intelligent face taking and the detection terminal directly faces the user's face, the photosensitive sensor is called to detect the ambient light intensity, which is compared with a threshold built into the system: when the ambient light intensity is lower than the system threshold, the LED lamps are turned on or their brightness is increased; when it is higher than the system threshold, the LED brightness is reduced or the LED lamps are even turned off. In the invention each LED lamp bead is controlled by a micro steering engine, so the angle of each LED lamp bead can be adjusted. After the camera captures the face and tongue images, if it finds that a certain area is insufficiently illuminated, the steering engines in the corresponding area are intelligently rotated to a suitable angle so that the LED lamp beads aim at and illuminate that area of the face or tongue, realizing intelligent light-supplement adjustment.
5. Intelligent shooting with voice prompts throughout, freeing both hands: when the camera has finished intelligent face taking and intelligent tongue taking, the user is prompted by voice to keep still for 3 seconds, and the camera then shoots automatically. The user does not need to tap the screen or press buttons; the whole image acquisition requires no manual intervention and shooting is completed intelligently.
6. Small volume, high portability: compared with existing equipment such as four-diagnosis instruments and other movable detection equipment, the invention is small and suitable for use in homes, pharmacies, beauty parlours, clinics, health-preserving institutions and even outdoors.
Drawings
FIG. 1 is a front view of a test terminal;
FIG. 2 is a side sectional view of the test terminal;
FIG. 3 is a schematic diagram of module connections of the test terminal;
FIG. 4 is a schematic view of a human body approach judgment process;
FIG. 5 is a schematic flow chart of intelligent face fetching;
FIG. 6 is a schematic flow chart of intelligent tongue extraction;
FIG. 7 is a schematic flow chart of intelligent light supplement;
FIG. 8 is a schematic flow chart of intelligent shooting.
Detailed Description
The technical solution of the present invention will be described in detail below with reference to the accompanying drawings and specific embodiments.
As shown in fig. 1 and 2, the intelligent facial image and tongue image acquisition terminal comprises a mirror-surface main body shell 1. The main body shell 1 is installed on a base 3 through a base support 2; a switch 4 is arranged on the base 3, and a power supply module 6 comprising a battery and a charging circuit board is arranged inside the base 3; preferably, a charging port is arranged on the base 3. Glass 9 is embedded in the surface of the main body shell 1; the glass 9 is preferably mirror glass and can be used as a mirror when the device is not started. The camera 7, the display screen 8 and the sensors are mounted on a support 16 inside the main body shell 1. The camera 7 is used for collecting face images or tongue images. The display screen is preferably a touch screen attached to the mirror glass; the user operates the touch screen by touching the mirror glass, and the touch screen is used for displaying the interface and for human-machine interaction. The sensors are mainly used for sensing the approach of a human face and detecting the distance between the face and the acquisition terminal. A main circuit board 19 is also arranged in the main body shell 1 and fixed to the support 16 with screws; the microprocessor, storage module and communication module are integrated on the main circuit board 19, and the power supply module 6, camera 7, display screen 8 and sensors are all connected with the microprocessor through the main circuit board to realize data input, processing and output.
Specifically, the sensors include a human body infrared pyroelectric sensor 11 and a distance sensor 12, whose signal output ports are respectively electrically connected with signal input ports of the microprocessor. The human body infrared pyroelectric sensor 11 is used to detect whether a face is approaching, the distance sensor 12 is further used to detect the distance between the face and the acquisition terminal, and the detection data are sent to the microprocessor.
A plurality of lamp beads 10 can further be installed on the main body shell 1 to form a lamp bead group, providing illumination in application scenarios with insufficient light. In fig. 1 and 2 the lamp beads 10 are LED lamp beads mounted on the inner support, with 20 of them uniformly distributed on the inner side of the glass 9. Further, the sensors also include a photosensitive sensor 13 for detecting the ambient light intensity; its signal output port is electrically connected with a signal input port of the microprocessor, and a signal output port of the microprocessor is electrically connected with the LED lamp beads. The photosensitive sensor sends the acquired ambient light intensity signal to the microprocessor, and the microprocessor correspondingly controls the LED lamp beads to turn on or off.
Further, a main body horizontal steering engine 5 and a main body vertical steering engine 18 are installed between the main body shell 1 and the base support 2; each LED lamp bead is connected with an LED lamp bead steering engine 14, which comprises a lamp bead horizontal steering engine and a lamp bead vertical steering engine; the camera 7 is connected with a camera steering engine 15, which comprises a camera horizontal steering engine and a camera vertical steering engine. The LED lamp bead steering engines 14, the camera steering engine 15, the main body horizontal steering engine 5 and the main body vertical steering engine 18 are all electrically connected with the microprocessor, so the main body shell 1, the LED lamp beads and the camera 7 can be rotated by their steering engines.
In addition, the equipment also comprises a voice playing module; this embodiment uses a loudspeaker connected with the microprocessor through the circuit board, and correspondingly a sound outlet 17 is provided on the back of the main body shell 1.
As shown in fig. 3, the photosensitive sensor, the human body infrared pyroelectric sensor, the distance sensor, the camera, the camera steering engine, the main body horizontal steering engine, the main body vertical steering engine, the LED lamp beads, the display screen and the loudspeaker are respectively electrically connected with the microprocessor: the signal output ports of the photosensitive sensor, the human body infrared pyroelectric sensor and the distance sensor are connected with signal input ports of the microprocessor, and signal output ports of the microprocessor are connected with the camera, the camera steering engine, the main body horizontal steering engine, the main body vertical steering engine, the LED lamp beads, the display screen and the loudspeaker.
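Purely as an illustration of these connections (the patent does not specify a concrete microprocessor, bus or pin assignment, so every identifier below is an assumption), the input/output relations could be recorded in a small Python configuration sketch:

    # Hypothetical signal map: which direction each module talks to the microprocessor.
    # Pin/port names are placeholders, not values from the patent.
    MCU_INPUTS = {
        "photosensitive_sensor": "analog_in_0",   # ambient light intensity
        "pir_sensor": "gpio_in_17",               # human body infrared pyroelectric sensor
        "distance_sensor": "gpio_in_27",          # face-to-terminal distance
    }
    MCU_OUTPUTS = {
        "camera": "usb_0",
        "camera_servo_horizontal": "pwm_0",
        "camera_servo_vertical": "pwm_1",
        "body_servo_horizontal": "pwm_2",
        "body_servo_vertical": "pwm_3",
        "led_beads": ["pwm_led_%d" % i for i in range(20)],  # 20 beads, each with its own micro steering engine
        "display": "hdmi_0",
        "speaker": "audio_0",
    }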
The microprocessor, the storage module and the communication module are integrated on the main circuit board, the acquisition terminal is in communication connection with the server through the communication module to achieve data transmission, acquired images are transmitted to the server, and the server feeds back results to the acquisition terminal after analysis.
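A rough sketch of this exchange is given below (assuming a hypothetical HTTP endpoint and JSON response, neither of which is specified in the patent; the helper is reused in the intelligent-shooting sketch later):

    import requests  # assumption: the terminal uses HTTP; the patent does not name a protocol

    SERVER_URL = "http://example-server/api/analyze"  # hypothetical endpoint

    def upload_image_and_get_result(image_path: str, image_type: str) -> dict:
        """Send a locally stored face/tongue image to the server and return its analysis result."""
        with open(image_path, "rb") as f:
            response = requests.post(
                SERVER_URL,
                files={"image": f},
                data={"type": image_type},  # "face" or "tongue"
                timeout=30,
            )
        response.raise_for_status()
        return response.json()  # e.g. {"status": "ok", "report": "..."}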
Based on the intelligent face image and tongue image acquisition terminal, the invention also provides an intelligent face image and tongue image acquisition method, which has the following working principle:
algorithm 1: human body approach judgment
The invention uses a human body infrared pyroelectric sensor to judge whether a person is approaching; when a person approaches, the standby picture is played to remind the user that the equipment can be used for health detection. A distance sensor detects how close the person is; after the user comes within a set distance threshold of the detection terminal, the detection module starts, the camera is opened, and a voice prompt asks the user whether to use the equipment (a code sketch follows the steps below).
The specific process is as follows:
(1) The human body infrared pyroelectric sensor detects human body infrared signals within 10 meters once every 500 ms; when a human body enters the detection range, the sensor transmits the detected signal to the microprocessor, and it is judged that a person is within 10 meters.
(2) The microprocessor sends a signal to the display screen; the display screen turns on, the standby picture is played, and a voice reminds the user that the equipment can be used for detection.
(3) The microprocessor simultaneously sends a signal to the distance sensor, which starts and detects the distance to the human body once every 500 ms, sending the distance data to the microprocessor. When the detected distance is less than 0.5 m, the microprocessor sends a signal to the display screen to show the detection interface, plays a voice to remind the user that detection is about to start, and begins a 5-second countdown.
(4) After the countdown ends, the microprocessor sends a signal to the camera, the camera opens, and a prompt voice is played: detection is starting, please keep still.
(5) This process is completed and the next process is entered.
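A minimal control-loop sketch of algorithm 1 is shown below. It is an illustration only: pir, distance_sensor, display, speaker and camera are hypothetical wrappers for the hardware described above, while the 500 ms polling period, the 0.5 m threshold and the 5-second countdown come from the steps just listed.

    import time

    POLL_PERIOD_S = 0.5        # both sensors are polled every 500 ms
    APPROACH_THRESHOLD_M = 0.5
    COUNTDOWN_S = 5

    def wait_for_user(pir, distance_sensor, display, speaker, camera):
        """Algorithm 1 sketch: wake the terminal when a person approaches, then open the camera."""
        # (1) Poll the pyroelectric sensor until a person is detected within its ~10 m range.
        while not pir.person_detected():
            time.sleep(POLL_PERIOD_S)

        # (2) Wake the screen, play the standby picture and a voice reminder.
        display.show_standby_screen()
        speaker.say("This equipment can be used for health detection.")

        # (3) Poll the distance sensor until the user is closer than 0.5 m, then count down 5 seconds.
        while distance_sensor.read_m() >= APPROACH_THRESHOLD_M:
            time.sleep(POLL_PERIOD_S)
        display.show_detection_screen()
        speaker.say("Detection is about to start.")
        time.sleep(COUNTDOWN_S)

        # (4) Open the camera and prompt the user to keep still.
        camera.open()
        speaker.say("Detection is starting, please keep still.")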
Algorithm 2: Intelligent face taking
No manual adjustment by the user is needed. When the user is within the detection distance range, the camera automatically captures the user's facial image, the facial contour is analysed in real time by an algorithm, the motion track of the face is tracked, and it is judged whether the camera directly faces the face. If not, the detection terminal is adjusted automatically: large-range adjustments are made by rotating and tilting the main body, and small-range adjustments by rotating the camera, until the camera directly faces the face and the complete facial image is collected, realizing intelligent face taking (a code sketch follows the steps below).
The specific process is as follows:
(1) After the face image acquisition prompt voice is played, a face image p is collected; after the image is preprocessed, the face contour feature T is extracted.
(2) Face matching: the feature T is matched against the face contour feature library [T1, T2, T3, T4, T5, …, Tn] built into the system, and from the matched feature the offset direction z and the offset degree y of the acquired image are obtained. Example: if T = T1, then z = z1 and y = y1.
(3) Decision adjustment: when y = 0, i.e. the camera directly faces the face, no adjustment is needed and the face picture is taken directly. When 0 < y < 10, there is an offset, but it is small (less than 10 degrees), so only the camera steering engine needs to be adjusted; the camera steering engine is small and not suitable for large-amplitude adjustment. When y >= 10, i.e. the offset angle is large, the main body steering engine needs to be adjusted. When z is in the down quadrant, the face image has shifted downwards, so the camera vertical steering engine or the main body vertical steering engine is started and adjusted upwards by y degrees; when z is in the up quadrant, the face image has shifted upwards, so the camera vertical steering engine is started and adjusted downwards by y degrees; when z is in the left quadrant, the face image has shifted to the left, so the camera horizontal steering engine is started and adjusted to the right by y degrees; when z is in the right quadrant, the face image has shifted to the right, so the camera horizontal steering engine is started and adjusted to the left by y degrees.
(4) After the steering engine is adjusted, the face image is acquired again and matched again until y = 0, at which point the camera directly faces the face.
(5) The face picture is taken, stored locally and sent to the server.
(6) Face acquisition is finished and the next process is entered.
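The alignment loop of steps (1)–(4) can be sketched as follows (a simplified Python illustration, assuming hypothetical camera, servo and match_contour interfaces; the 10-degree boundary between camera-servo and main-body-servo adjustment comes from step (3), and the same routine is reused for tongue alignment with a 5-degree boundary after algorithm 3):

    def align_camera(camera, camera_servo, body_servo, match_contour, coarse_threshold_deg):
        """Algorithm 2/3 sketch: re-aim the device until the captured face/tongue is centred (y == 0)."""
        while True:
            image = camera.capture()
            features = camera.preprocess_and_extract(image)
            z, y = match_contour(features)   # offset direction (up/down/left/right) and offset degree

            if y == 0:
                return image                 # the camera directly faces the target; ready to shoot

            # Small offsets: fine-tune with the camera steering engine.
            # Large offsets (>= coarse_threshold_deg): move the whole body with the main steering engine.
            servo = camera_servo if y < coarse_threshold_deg else body_servo

            if z == "down":                  # image shifted downwards -> adjust upwards by y degrees
                servo.tilt(+y)
            elif z == "up":
                servo.tilt(-y)
            elif z == "left":                # image shifted left -> adjust to the right by y degrees
                servo.pan(+y)
            elif z == "right":
                servo.pan(-y)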
Algorithm 3: intelligent tongue taking device
When the tongue image is collected, the equipment has already been adjusted through the intelligent face taking of the previous step, so the camera is aimed at the user's face. Therefore, when the user sticks out the tongue, the camera is no longer directly facing the tongue but the face. The user's tongue image is captured automatically and analysed in real time by an algorithm; the main body position is adjusted by the main body steering engine and the camera position is finely adjusted, so that the camera directly faces the tongue and a complete frontal tongue image is collected, realizing intelligent tongue taking (a short sketch follows the steps below).
The specific process is as follows:
(1) After the tongue image acquisition prompt voice is played, a tongue image q is collected; after the image is preprocessed, the tongue contour feature S is extracted.
(2) Tongue matching: the feature S is matched against the tongue contour feature library [S1, S2, S3, S4, S5, …, Sn] built into the system, and from the matched feature the offset direction z and the offset degree y of the acquired image are obtained. Example: if S = S1, then z = z1 and y = y1.
(3) Decision adjustment: when y = 0, i.e. the camera directly faces the tongue, no adjustment is needed and the tongue picture is taken directly. When 0 < y < 5, there is an offset, but it is small (less than 5 degrees), so only the camera steering engine needs to be adjusted; the camera steering engine is small and not suitable for large-amplitude adjustment. When y >= 5, i.e. the offset angle is large, the main body steering engine needs to be adjusted. When z is in the down quadrant, the tongue image has shifted downwards, so the camera vertical steering engine or the main body vertical steering engine is started and adjusted upwards by y degrees; when z is in the up quadrant, the tongue image has shifted upwards, so the camera vertical steering engine is started and adjusted downwards by y degrees; when z is in the left quadrant, the tongue image has shifted to the left, so the camera horizontal steering engine is started and adjusted to the right by y degrees; when z is in the right quadrant, the tongue image has shifted to the right, so the camera horizontal steering engine is started and adjusted to the left by y degrees.
(4) After the steering engine is adjusted, the tongue image is acquired again and matched again until y = 0, showing that the camera directly faces the tongue and a complete frontal tongue image can be taken.
(5) The tongue picture is taken, stored locally and sent to the server.
(6) Tongue image acquisition is finished.
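Under the same assumptions as the sketch after algorithm 2 (all names are hypothetical), intelligent tongue taking differs only in the contour library used and in the tighter 5-degree boundary for switching from the camera steering engine to the main body steering engine:

    # Hypothetical reuse of the align_camera sketch for tongue images.
    tongue_image = align_camera(
        camera,
        camera_servo,
        body_servo,
        match_contour=match_tongue_contour,  # matches against [S1, S2, ..., Sn]
        coarse_threshold_deg=5,              # algorithm 3 uses 5 degrees instead of 10
    )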
Algorithm 4: Intelligent light supplement
The invention comprises a group of several LED lamp beads with steering engines (20 in this embodiment), a photosensitive sensor and a camera. The photosensitive sensor detects the ambient light intensity, which is compared with a threshold built into the system: when the ambient light intensity is lower than the system threshold, the LED lamps are turned on or their brightness is increased; when it is higher than the system threshold, the LED brightness is reduced or the LED lamps are even turned off.
When the camera has finished intelligent face or tongue taking and directly faces the user's face or tongue, the image is captured and its illuminance is analysed: the illuminance feature of the current image is extracted and matched against the illuminance feature library built into the system to obtain the current illuminance. If the overall illuminance is lower than the standard illuminance, the LED brightness is increased; if it is higher, the LED brightness is reduced; if a local position is below or above the standard illuminance, the LED lamps at the corresponding positions are brightened or dimmed. In the invention each LED lamp bead is controlled by a micro steering engine, so the illumination angle of each LED lamp bead can be adjusted. If a local position deviates, the LED lamp steering engine is started and rotated to a suitable angle so that the LED lamp bead aims at and illuminates that area of the face or tongue, making the illumination intensity consistent and optimal at all positions and realizing intelligent light-supplement adjustment (a code sketch follows the steps below).
The specific process is as follows:
(1) This algorithm is started when algorithm 2 or algorithm 3 enters the intelligent light supplement step.
(2) The photosensitive sensor measures the ambient light intensity L once every 50 ms and continuously takes 500 measurements [L1, L2, L3, …, L500], which are averaged to obtain the average ambient illumination intensity L′.
(3) Ambient light matching: the average ambient illumination intensity L′ is compared with the ambient illumination levels built into the system, namely low illumination La, medium illumination Lb and high illumination Lc. When L′ < La, the ambient light is too low and all the LED lamps are turned on; when L′ > Lc, the ambient light is too strong and all the LED lamps are turned off; when La < L′ < Lb, the brightness of all the LED lamps is increased; when Lb < L′ < Lc, the brightness of all the LED lamps is reduced; when L′ = Lb, the ambient illuminance is close to the standard and the next step can be performed.
(4) The camera collects an image I, and after image preprocessing the illuminance feature Q of the face or tongue image is extracted.
(5) Overall image illuminance matching: the feature Q is matched against the illuminance feature library [Q1, Q2, Q3, Q4, Q5, …, Qn] built into the system, and from the matched feature the illuminance R of the acquired image is obtained and compared with the standard image illuminance Re built into the system. If R > Re, the brightness of all LEDs is reduced; if R < Re, the brightness of all LEDs is increased; if R = Re, the overall illuminance of the currently acquired image is equivalent to the built-in standard and the next step can be performed. Example: if Q = Q1, then R = R1.
(6) Local image illuminance matching: from the matching in the previous step, the coordinates P(Xn, Yn) of a weakly illuminated position and its offset degree W are also obtained. Example: if Q = Q1, then P(Xn, Yn) = P(X1, Y1) and W = W1. According to the coordinate position, the steering engine of the corresponding LED lamp bead is started and adjusted by W degrees in the opposite direction, so that the LED lamp bead aims at and illuminates that position.
(7) Intelligent light supplement is finished.
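Steps (2)–(6) condense to the routine below (a sketch only: light_sensor, led_beads, camera and match_illuminance are hypothetical interfaces, while the 50 ms sampling period, the 500-sample average, the thresholds La < Lb < Lc and the standard illuminance Re follow the steps above):

    import time

    def intelligent_fill_light(light_sensor, led_beads, camera, match_illuminance, La, Lb, Lc, Re):
        """Algorithm 4 sketch: coarse ambient-light adjustment, then image-based fine adjustment."""
        # (2) Average 500 ambient-light samples taken every 50 ms.
        samples = []
        for _ in range(500):
            samples.append(light_sensor.read_lux())
            time.sleep(0.05)
        ambient = sum(samples) / len(samples)

        # (3) Coarse adjustment against the built-in thresholds La < Lb < Lc.
        if ambient < La:
            led_beads.all_on()               # ambient light far too low
        elif ambient > Lc:
            led_beads.all_off()              # ambient light too strong
        elif ambient < Lb:
            led_beads.increase_brightness()
        elif ambient > Lb:
            led_beads.decrease_brightness()
        # ambient == Lb: close to the standard, no coarse change needed

        # (4)-(6) Fine adjustment from the illuminance features of the captured image.
        image = camera.capture()
        R, weak_spot, offset_deg = match_illuminance(image)  # overall illuminance, weak position, offset
        if R > Re:
            led_beads.decrease_brightness()
        elif R < Re:
            led_beads.increase_brightness()
        if weak_spot is not None:
            # Aim the nearest lamp bead's micro steering engine at the under-lit position.
            led_beads.nearest_to(weak_spot).servo.rotate(-offset_deg)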
Algorithm 5: Intelligent shooting
When the camera has finished intelligent face taking and intelligent tongue taking, the user is prompted by voice to keep still for 3 seconds, and the camera then shoots automatically. The user does not need to tap the screen or press buttons; the whole image acquisition requires no manual intervention and shooting is completed intelligently (a code sketch follows the steps below).
The specific process is as follows:
(1) This algorithm is started when algorithms 2 and 3 reach the step of shooting the face picture or tongue picture;
(2) the voice prompt "image acquisition will start in 3 seconds" is played;
(3) the camera shoots after 3 seconds;
(4) the camera is closed;
(5) the image is preprocessed;
(6) after local storage, the data is sent to the server;
(7) a voice prompt is played that image acquisition is finished.
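The shooting sequence reduces to a short routine (sketch only; speaker, camera, storage and preprocess_image are hypothetical interfaces, and upload_fn is meant to be something like the hypothetical upload_image_and_get_result helper from the communication sketch earlier):

    import time

    def smart_shoot(camera, speaker, storage, preprocess_image, upload_fn, image_type):
        """Algorithm 5 sketch: hands-free capture once the camera directly faces the face/tongue."""
        speaker.say("Image acquisition will start in 3 seconds, please keep still.")  # (2)
        time.sleep(3)
        image = camera.capture()                       # (3) shoot after 3 seconds
        camera.close()                                 # (4) close the camera
        image = preprocess_image(image)                # (5) preprocess
        path = storage.save_local(image, image_type)   # (6) store locally ...
        upload_fn(path, image_type)                    # ... then send to the server
        speaker.say("Image acquisition is finished.")  # (7)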
The working process is as follows:
after the detection equipment is started, the screen is normally in a starting state but is closed, and the human body infrared pyroelectric sensor can automatically judge whether a person approaches. When a person approaches, the standby meeting is automatically played to remind the user that the equipment can be used for health detection. The distance sensor of equipment detects the distance that the people is close to, after the user is close to detection terminal and arrives certain distance threshold value, detection module opens, the camera is opened, and there is the voice prompt user if use this equipment, the camera can catch user's facial image automatically, through algorithm real-time analysis user face profile, track user face's movement track, judge whether face is facing, if find not face, need adjust on a large scale then through adjusting main part level and vertical steering wheel automatically regulated detection terminal, need the adjustment of minizone then realize rotatory camera through adjusting the camera steering wheel, finally realize facing face, whole collection face's image, realize that intelligence is got the face, shoot face's image, send the server after the localization storage. After the face image is shot, the tongue image collecting process is started, the voice prompt user stretches out the tongue, the camera cannot be over against the tongue but over against the face at the moment, therefore, the tongue image of the user is automatically captured, the tongue image of the user is analyzed in real time through an algorithm, the position of the main body is adjusted through the main body steering engine and the position of the camera is finely adjusted, the camera is over against the tongue, the positive and complete tongue image is collected, intelligent tongue taking is achieved, the tongue image is shot, and the tongue image is sent to the server after being stored locally. Before shooting the face and tongue images, the intelligent light supplementing process is also provided, when the camera is over against the face or the tongue of a user, the image is captured and illuminance analysis is carried out, the current image illuminance characteristic is extracted and is matched with an illuminance characteristic library built in the system, the current illuminance is obtained, the brightness of the LED lamp is increased if the whole image illuminance is lower than the standard illuminance, the brightness of the LED lamp is weakened if the whole image illuminance is higher than the standard illuminance, and the LED lamp at the corresponding position is controlled to be increased or weakened if the local position is lower than or higher than the standard illuminance. According to the invention, each LED lamp bead is controlled by a micro steering engine, so that the illumination angle of the LED lamp bead can be adjusted. If the local position shifts, then start the LED lamp steering wheel, rotatory to suitable angle to realize that LED lamp pearl aims at facial or tongue region and shines, with all positions illumination intensity unanimity, and be in optimum, realize that intelligent light filling is adjusted. In the image shooting link, when the camera finishes intelligent face taking and intelligent tongue taking, the user can be prompted to keep still for 3 seconds through voice, and the camera can shoot automatically at the moment. Without requiring the user to operate a click screen or click buttons. 
The whole-course image acquisition does not need manual intervention of a user, and shooting is intelligently completed.
After the user's images are collected and uploaded to the server, the server analyses them and sends the detection result back to the detection equipment, and the user can view the result on the display screen. When the human body infrared pyroelectric sensor detects that the user has left the detection range, the display screen turns off automatically.

Claims (10)

1. An intelligent facial image and tongue image acquisition terminal system is characterized by comprising:
the main body shell is provided with glass on the surface, a support is arranged in the main body shell, a display screen, a sensor, a camera and a camera steering engine are mounted on the support, the camera is connected with the camera steering engine, the display screen, the sensor, the camera and the camera steering engine are all located on the inner side of the glass, the main body shell is mounted on a base through a base support, and a main body steering engine for controlling the rotation of the main body shell is mounted between the main body shell and the base support; the sensor comprises a human body infrared pyroelectric sensor and a distance sensor, the main body steering engine comprises a main body horizontal steering engine and a main body vertical steering engine, and the camera steering engine comprises a camera vertical steering engine and a camera horizontal steering engine;
the microprocessor is electrically connected with the display screen, the sensor, the camera steering engine and the main body steering engine; after receiving the human body signal sent by the sensor, the microprocessor controls the display screen and the camera to be started, the camera collects a face image/tongue image and sends the image to the microprocessor, and the microprocessor analyzes the image and adjusts an image capturing angle through a camera steering engine and/or a main body steering engine;
the voice playing module is electrically connected with the microprocessor;
and the power supply module is electrically connected with the microprocessor.
2. The intelligent facial image and tongue image acquisition terminal system according to claim 1, characterized in that: the lamp bead group is arranged on the support and consists of a plurality of lamp beads, and the lamp beads are positioned on the inner side of the glass.
3. The intelligent facial image and tongue image acquisition terminal system according to claim 2, characterized in that: the lamp beads are electrically connected with the microprocessor.
4. The intelligent facial image and tongue image acquisition terminal system according to claim 3, wherein: the support is also provided with a lamp bead steering engine, the lamp bead is connected with the lamp bead steering engine, the lamp bead steering engine is positioned on the inner side of the glass, and the lamp bead steering engine comprises a lamp bead vertical steering engine and a lamp bead horizontal steering engine; after the microprocessor analyzes the image, the illumination angle of the lamp beads is adjusted through the lamp bead steering engine.
5. The intelligent facial image and tongue image acquisition terminal system as claimed in claim 3 or 4, wherein: the sensor comprises a photosensitive sensor which is electrically connected with the microprocessor.
6. The intelligent facial image and tongue image acquisition terminal system according to claim 1, characterized in that: the system comprises a communication module, a microprocessor and a remote server, wherein the communication module is electrically connected with the microprocessor and is used for being in communication connection with the remote server.
7. A facial image and tongue image acquisition method based on the intelligent facial image and tongue image acquisition terminal system as claimed in any one of claims 1 to 6, comprising:
firstly, human body approach judgment:
the human body infrared pyroelectric sensor detects a human body infrared signal and transmits the human body infrared signal to the microprocessor, the microprocessor judges whether a person approaches, and when the person approaches, the microprocessor controls the display screen to be started; meanwhile, the microprocessor controls the distance sensor to be started, the distance sensor detects the distance between a user and the equipment, and when the distance reaches a distance threshold value set by the system, the camera is started;
secondly, intelligently taking faces/tongues:
the camera collects the face image/tongue image of a user, analyzes the face/tongue contour of the user in real time, tracks the face movement track of the user, and judges whether the camera directly faces the face; if not, the angle of the camera or of the main body shell of the acquisition terminal is automatically adjusted so that the camera directly faces the face/tongue; the method comprises the following specific steps:
(1) the camera collects an image p, and after the image is preprocessed, facial/tongue contour features are extracted;
(2) face/tongue matching: matching with a face/tongue contour feature library built in the system, and obtaining the offset direction z and the offset degree y of the acquired image according to one feature in the corresponding feature library;
(3) and (3) decision adjustment:
when y is equal to 0, namely the camera is over against the face/tongue, the face/tongue picture is directly shot without adjustment; when y is not equal to 0, comparing the offset degree y with an offset degree threshold value set by the system, and adjusting an image capturing angle by adjusting a camera steering engine or a main body steering engine;
when z is in the down quadrant, the face/tongue image has shifted downwards, and the camera vertical steering engine or the main body vertical steering engine is started and adjusted upwards by y degrees; when z is in the up quadrant, the face/tongue image has shifted upwards, and the camera vertical steering engine is started and adjusted downwards by y degrees; when z is in the left quadrant, the face/tongue image has shifted to the left, and the camera horizontal steering engine is started and adjusted to the right by y degrees; when z is in the right quadrant, the face/tongue image has shifted to the right, and the camera horizontal steering engine is started and adjusted to the left by y degrees;
(4) after the steering engine is adjusted, acquiring the face/tongue image again, and matching again until y is equal to 0, wherein the camera is opposite to the face/tongue;
(5) finishing the collection of the facial image/tongue image;
thirdly, intelligently shooting:
when the camera finishes intelligent face/tongue taking, the voice prompts the user to keep still, the camera automatically shoots at the moment, shooting is finished, and the camera is closed.
8. The method for collecting facial image and tongue image as claimed in claim 7, wherein: the photosensitive sensor detects the intensity of ambient light, the microprocessor compares the intensity of the ambient light with a light intensity threshold value arranged in the system, and the microprocessor carries out intelligent light supplement by turning on/off the lamp beads or increasing/reducing the brightness of the lamp beads.
9. The method for collecting facial and tongue images as claimed in claim 7 or 8, wherein: when the camera finishes face taking/tongue taking, the microprocessor analyzes the illuminance of the image, extracts the current image illuminance characteristic, matches the current image illuminance characteristic with an illuminance characteristic library built in the system to obtain the current illuminance, compares the current illuminance with the standard illuminance, and intelligently supplements light by turning on/off the lamp beads or increasing/reducing the brightness of the lamp beads.
10. The method for collecting facial image and tongue image as claimed in claim 9, wherein: after illuminance analysis is carried out, if the illuminance of a local position deviates, the lamp bead steering engine at the corresponding position is adjusted so that the lamp bead at that position rotates to a suitable irradiation angle, realizing intelligent light-supplement adjustment.
CN201911240545.9A 2019-12-06 2019-12-06 Intelligent face image and tongue image acquisition terminal system and face image and tongue image acquisition method Pending CN110811564A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201911240545.9A CN110811564A (en) 2019-12-06 2019-12-06 Intelligent face image and tongue image acquisition terminal system and face image and tongue image acquisition method

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201911240545.9A CN110811564A (en) 2019-12-06 2019-12-06 Intelligent face image and tongue image acquisition terminal system and face image and tongue image acquisition method

Publications (1)

Publication Number Publication Date
CN110811564A true CN110811564A (en) 2020-02-21

Family

ID=69544722

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201911240545.9A Pending CN110811564A (en) 2019-12-06 2019-12-06 Intelligent face image and tongue image acquisition terminal system and face image and tongue image acquisition method

Country Status (1)

Country Link
CN (1) CN110811564A (en)

Cited By (9)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN112040135A (en) * 2020-09-22 2020-12-04 深圳鼎识科技股份有限公司 Method for automatically snapping human face by human face camera
CN114339024A (en) * 2020-10-10 2022-04-12 上海汉时信息科技有限公司 Method for improving imaging reflection of camera, server and shooting equipment
CN113317659A (en) * 2021-05-18 2021-08-31 深圳市行识未来科技有限公司 A show shelf for planar design three-dimensional model
CN113361513A (en) * 2021-06-07 2021-09-07 博奥生物集团有限公司 Mobile terminal tongue picture acquisition method, device and equipment
CN114495367A (en) * 2021-12-28 2022-05-13 成都美透科技有限公司 Block chain based medical and American user service prompting method, device and system
CN115079487A (en) * 2022-06-15 2022-09-20 深圳中易健康科技有限公司 Self-adaptive light supplementing device and method applied to tongue image shooting
CN115079487B (en) * 2022-06-15 2024-03-29 深圳中易健康科技有限公司 Self-adaptive light supplementing device and method applied to tongue image shooting
CN115460346A (en) * 2022-08-17 2022-12-09 山东浪潮超高清智能科技有限公司 Data acquisition device capable of automatically adjusting angle
CN115460346B (en) * 2022-08-17 2024-01-23 山东浪潮超高清智能科技有限公司 Automatic angle-adjusting data acquisition device

Similar Documents

Publication Publication Date Title
CN110811564A (en) Intelligent face image and tongue image acquisition terminal system and face image and tongue image acquisition method
CN103729981B (en) A kind of child sitting gesture monitoring intelligent terminal
CN105011903B (en) A kind of Intelligent health diagnosis system
RU2663492C2 (en) Electronic ophthalmological lens with sleep tracking
US5802494A (en) Patient monitoring system
US20170363885A1 (en) Image alignment systems and methods
CN104898406B (en) Electronic equipment and collection control method
CN106998499A (en) It is capable of the intelligent TV set and its control system and control method of intelligent standby
CN109271875B (en) A kind of fatigue detection method based on supercilium and eye key point information
CN107169309B (en) Visual field detection method, system and detection device based on wear-type detection device
CN106973326A (en) It is capable of the intelligent TV set and its control system and control method of intelligent standby
CN106952252B (en) Skin imaging system based on three spectrums
WO2008141535A1 (en) Double eyes' images acquisition apparatus using an active vision feedback mode for iris recognition
CN107773225A (en) Pulse wave measuring apparatus, pulse wave measuring method, program and recording medium
CN108108693B (en) Intelligent identification monitoring device and recognition methods based on 3D high definition VR panorama
CN107273071A (en) Electronic installation, screen adjustment system and method
CN106652228A (en) Self-service type snapshooting equipment and method
CN112329652A (en) Sliding type self-adaptive finger vein recognition device and method
CN110619956A (en) Traditional chinese medical science intelligent robot
CN111734974B (en) Intelligent desk lamp with sitting posture reminding function
CN110414445A (en) Light source adjusting method, device and electronic equipment for recognition of face
CN206258976U (en) A kind of self-service Snapshot Devices
CN111182204A (en) Shooting method based on wearable device and wearable device
CN216697347U (en) Face vein collection device and equipment
Park et al. Implementation of an eye gaze tracking system for the disabled people

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination