CN110585592B - Personalized electronic acupuncture device and generation method and generation device thereof - Google Patents

Personalized electronic acupuncture device and generation method and generation device thereof

Info

Publication number
CN110585592B
CN110585592B
Authority
CN
China
Prior art keywords
image
eye
acupuncture
eyebrow
user
Prior art date
Legal status
Active
Application number
CN201910704753.3A
Other languages
Chinese (zh)
Other versions
CN110585592A (en)
Inventor
毕宏生
吴建峰
毛力
宋继科
郭俊国
Current Assignee
Jinan Tongxing Intelligent Technology Co ltd
Original Assignee
Jinan Tongxing Intelligent Technology Co ltd
Priority date
Filing date
Publication date
Application filed by Jinan Tongxing Intelligent Technology Co ltd
Priority to CN201910704753.3A
Publication of CN110585592A
Application granted
Publication of CN110585592B
Legal status: Active

Classifications

    • A HUMAN NECESSITIES
    • A61 MEDICAL OR VETERINARY SCIENCE; HYGIENE
    • A61H PHYSICAL THERAPY APPARATUS, e.g. DEVICES FOR LOCATING OR STIMULATING REFLEX POINTS IN THE BODY; ARTIFICIAL RESPIRATION; MASSAGE; BATHING DEVICES FOR SPECIAL THERAPEUTIC OR HYGIENIC PURPOSES OR SPECIFIC PARTS OF THE BODY
    • A61H 39/00 Devices for locating or stimulating specific reflex points of the body for physical therapy, e.g. acupuncture
    • A61H 39/02 Devices for locating such points
    • A HUMAN NECESSITIES
    • A61 MEDICAL OR VETERINARY SCIENCE; HYGIENE
    • A61N ELECTROTHERAPY; MAGNETOTHERAPY; RADIATION THERAPY; ULTRASOUND THERAPY
    • A61N 1/00 Electrotherapy; Circuits therefor
    • A61N 1/18 Applying electric currents by contact electrodes
    • A61N 1/32 Applying electric currents by contact electrodes alternating or intermittent currents
    • A61N 1/36 Applying electric currents by contact electrodes alternating or intermittent currents for stimulation
    • A61N 1/36014 External stimulators, e.g. with patch electrodes
    • A HUMAN NECESSITIES
    • A61 MEDICAL OR VETERINARY SCIENCE; HYGIENE
    • A61N ELECTROTHERAPY; MAGNETOTHERAPY; RADIATION THERAPY; ULTRASOUND THERAPY
    • A61N 1/00 Electrotherapy; Circuits therefor
    • A61N 1/18 Applying electric currents by contact electrodes
    • A61N 1/32 Applying electric currents by contact electrodes alternating or intermittent currents
    • A61N 1/36 Applying electric currents by contact electrodes alternating or intermittent currents for stimulation
    • A61N 1/36046 Applying electric currents by contact electrodes alternating or intermittent currents for stimulation of the eye

Abstract

The application discloses a method for generating a personalized electronic acupuncture device, which comprises: obtaining a face model of a user through three-dimensional scanning; performing image recognition on the face model to obtain the position information of a plurality of corresponding eye acupuncture points on the face model; generating a personalized three-dimensional eyeshade model matched with the face model; and printing the three-dimensional eyeshade model with three-dimensional printing equipment to obtain a personalized acupuncture eyeshade. When a patient is treated for eye diseases such as myopia and amblyopia, the acupuncture points are positioned automatically by the acupuncture eyeshade without requiring a professional, which is convenient and fast and gives high positioning precision. Because the acupuncture eyeshade differs according to each patient's facial structure, the patient is more comfortable when wearing it; personalized and precise treatment is achieved through per-patient acupoint positioning and intelligent regulation of the stimulation parameters; and the eyeshade is easy to reproduce quickly and to apply over a wide area, which helps relieve the shortage of medical resources for myopia prevention and control.

Description

Personalized electronic acupuncture device and generation method and generation device thereof
Technical Field
The application relates to the field of electronic acupuncture therapeutic instruments, in particular to a personalized electronic acupuncture device and a generation method and a generation device thereof.
Background
There are many methods in traditional Chinese medicine for treating vision disorders; for example, myopia and amblyopia can be treated by stimulating acupuncture points. In traditional acupuncture, a needle (usually a filiform needle) is inserted into the patient's body at a certain angle, and specific parts of the body are stimulated with manipulation techniques such as twirling, lifting and thrusting, thereby treating disease.
However, conventional eye acupuncture must be performed by trained professionals and is therefore difficult to popularize widely. Electronic eye acupuncture instruments improve this situation: instead of needles, they stimulate the acupuncture points with electronic pulses.
Existing electronic acupuncture instruments still have drawbacks. Because the acupuncture point locations differ slightly from person to person, a user operating the instrument without professional assistance often cannot align the electrodes with the acupuncture points, so the expected therapeutic effect is not achieved.
Disclosure of Invention
In order to solve the above problems, the present application provides a method of generating a personalized electronic acupuncture device, the method comprising: three-dimensionally scanning the face of a user to obtain a user face model; determining position information of a plurality of eye acupuncture points corresponding to the face model according to the preset position relation of the eye acupuncture points on the face of the person; generating a three-dimensional eyeshade model matched with the face model according to the face model of the user and the position information of a plurality of corresponding eye acupuncture points on the face model; wherein, the positions of the three-dimensional eyeshade model corresponding to the eye acupuncture points are provided with mounting holes for mounting conductive column heads; the conductive column head is connected with an electronic acupuncture controller in an electronic acupuncture device so as to transmit electronic pulses generated by the electronic acupuncture device to the conductive column head; printing the three-dimensional eyeshade model through three-dimensional printing equipment to obtain an acupuncture eyeshade; wherein the acupuncture eye shield is part of an electronic acupuncture device.
In one example, determining the position information of the plurality of eye acupuncture points corresponding to the face model according to the preset positional relationship of the eye acupuncture points on the human face specifically includes: determining an eyebrow image of the user from the face model by coarse positioning, where the eyebrow image is an image containing the user's eyebrows and eye areas; converting the eyebrow image to grayscale; constructing a transverse operator according to the size of the grayscale eyebrow image, where the length of the transverse operator is an odd number; convolving the transverse operator with the grayscale eyebrow image to obtain a transverse gray-scale variation curve of the eyebrow image; taking the position of the maximum of the transverse gray-scale variation curve as the transverse center position of the eye area; from the transverse center position of the eye area, moving upward and downward in the longitudinal direction until the curve falls to a preset proportion of the maximum, and taking the two resulting positions as the upper boundary and the lower boundary of the eye area; cropping the eyebrow image according to the upper and lower boundaries of the eye area to obtain a transverse position image of the eye area; for each pixel column in the left half or the right half of the transverse position image, calculating a longitudinal gray-scale integration function between the upper boundary and the lower boundary to obtain a longitudinal gray-scale integration curve; among all the peaks and troughs of the longitudinal gray-scale integration curve, taking the positions in the transverse position image corresponding to the leftmost and rightmost peaks or troughs as the left boundary and the right boundary of the eye area in the longitudinal direction; cropping the transverse position image according to the left and right boundaries of the eye area to determine the eye area on the face model; and determining the position information of the corresponding eye acupuncture points on the face model according to the determined eye area and the preset positional relationship between the eye acupuncture points and the eye area on the human face.
In one example, the coarse positioning method includes: determining the eyebrow images of the user in the face images corresponding to the face model through the trained positioning model; and when the positioning model is trained, the facial images of a plurality of users are input, and the eyebrow images of the users are output.
In one example, after determining the eye region on the face model, the method further comprises: coarsely positioning the edge of the image corresponding to the eye area to obtain edge pixels, and representing the edge within each edge pixel by a straight line x cos α + y sin α = l, where l is the distance from the coordinate center point to the edge and α is the angle between the gradient direction of the edge and the x axis; and taking the sub-pixel position of the corresponding two-dimensional step edge as the eye edge position, where a is the gray value inside the edge and b is the edge height.
In one example, after determining the eye region on the face model, the method further comprises: taking the part of the eyebrow image that excludes the eye area as an eyebrow candidate image; selecting a plurality of candidate gray values from the gray histogram of the eyebrow candidate image, and binarizing the eyebrow candidate image with each candidate gray value to obtain a plurality of binarized eyebrow candidate images, where during binarization the area formed by pixels whose gray values are smaller than the candidate gray value is called an effective area; fusing the binarized eyebrow candidate images to obtain a fused image, where, during fusion, a second effective region is said to contain a first effective region when the first effective region is completely contained within it, and the effective regions retained in the fused image are called fusion effective regions; for each fusion effective region in the fused image, calling it a candidate effective region if the number of effective regions it contains is greater than a preset threshold; and determining, according to the information entropy of each candidate effective region in the fused image, the candidate effective region where the eyebrow is located, which is called the eyebrow region. Determining the position information of the corresponding eye acupuncture points on the face model according to the determined eye region and the positional relationship between the eye acupuncture points and the eye region then specifically includes: determining the position information of the corresponding eye acupuncture points on the face model according to the determined eye area, the determined eyebrow area, and the preset positional relationships of the eye acupuncture points with the eye area and the eyebrow area on the human face.
In one example, after determining the eyebrow image of the user by coarse positioning on the face model, the method further comprises: computing a sharpness score S from the gradient maps of the eyebrow image (the formulas are given in the original filing as images and are not reproduced here), and judging the eyebrow image to be a blurred image when S < 7, where g_x(i, j) and g_y(i, j) are the gradient maps of the eyebrow image f in the x and y directions respectively, m and n are the numbers of rows and columns of the eyebrow image f, and G_num is the total number of non-zero gradient values in the x-direction and y-direction gradient maps; determining a foreground blurred image q(x, y) within the blurred image (again per formulas reproduced only as images), where c is a third preset value, d is a fourth preset value, N_h is the total number of pixels in the neighborhood of pixel (x, y) in the blurred image, h(x, y) is the set of pixels in that neighborhood, I(s, t) is the gray value of a pixel in the neighborhood, and m(x, y) is the mean of the gray values in the neighborhood; and processing the determined foreground blurred image with Gaussian filtering to obtain a foreground clear image, which is used as the deblurred eyebrow image.
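For illustration only, a minimal Python sketch of the gradient-based blur check and the subsequent filtering step described above is given below. Because the exact formulas appear only as images in the original filing, the sharpness score here is an assumed average gradient magnitude, and all function names, thresholds and library calls (OpenCV, NumPy) are illustrative rather than the actual implementation of the application.

```python
import cv2
import numpy as np

def is_blurred(gray, threshold=7.0):
    """Assumed sharpness test: S is taken as the mean gradient magnitude over
    non-zero gradient responses; the image is judged blurred when S < 7."""
    gx = cv2.Sobel(gray, cv2.CV_64F, 1, 0, ksize=3)      # stand-in for g_x(i, j)
    gy = cv2.Sobel(gray, cv2.CV_64F, 0, 1, ksize=3)      # stand-in for g_y(i, j)
    g_num = np.count_nonzero(gx) + np.count_nonzero(gy)  # G_num
    if g_num == 0:
        return True
    s = np.sqrt(gx ** 2 + gy ** 2).sum() / g_num
    return s < threshold

def filter_foreground(foreground_gray):
    # The text specifies Gaussian filtering of the detected foreground blurred
    # region; the result is used as the deblurred eyebrow image.
    return cv2.GaussianBlur(foreground_gray, (5, 5), 0)
```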
In one example, a viewing hole is provided on the three-dimensional eyeshade model at the position corresponding to the user's eyes, and the electronic acupuncture device further comprises: a display device connected with the electronic acupuncture controller and positioned to correspond to the viewing hole; and an eye-movement tracking device connected with the electronic acupuncture controller, which acquires an eye image of the user while the display device shows a preset picture and sends the eye image to the electronic acupuncture controller, so that the electronic acupuncture controller obtains the user's gaze trajectory on the display device from the eye image and determines, based on the preset picture and the gaze trajectory, whether the user is vision-impaired. The display device is also configured to receive a playing instruction sent by the electronic acupuncture controller when the controller determines that the user's vision disorder is amblyopia, and to play a multimedia file according to the playing instruction; the picture corresponding to the multimedia file comprises a lower-layer pattern and an upper-layer grating, the upper-layer grating being a black-and-white alternating bar grating pattern that rotates at a preset speed.
In one example, the electronic acupuncture controller obtaining the user's gaze trajectory on the display device from the eye image specifically includes: the electronic acupuncture controller segments the eye image according to preset thresholds and extracts the eyeball region corresponding to the eye image according to the resulting segmented regions and the mutual wrapping characteristics among them; within the eyeball region, the point with the lowest gray value is selected as a seed point, and the pupil region is obtained through a preset growth threshold, a boundary condition and a region-growing algorithm; the pupil center position is determined from the pupil region; pupil position data corresponding to the eye image are determined from the pupil center position, the pupil position data being the relative offset between the pupil center position and the corneal reflection point position in the eye image; and the user's gaze trajectory on the display device is determined from the pupil position data.
In one example, determining the user's gaze trajectory from the pupil position data specifically includes: binarizing the eye image according to a preset threshold to obtain a binarized image containing reflective points; calculating the area of each reflective point in the binarized image; taking the reflective points whose areas fall within a preset range as corneal reflection points; and determining the user's gaze trajectory on the display device from the relative offset between the pupil center position and the corneal reflection point position.
In one example, the electronic acupuncture device further includes: the first shell is of a hollow structure with one open end, and the outer side of the other end of the first shell is connected with the acupuncture eye patch and used for fixing the acupuncture eye patch; the second shell is of a hollow structure with two open ends, wherein one end of the second shell is connected with the open end of the first shell; the third shell is of a hollow structure with one end open and the other end closed, wherein the open end of the third shell is connected with the other end of the second shell; the conductive column head is hollow, and one end of the conductive column head close to the face of a user is in an arc shape; the conductive column head material is conductive silica gel.
In one example, the electronic acupuncture controller is further used for determining one or more of the frequency, the amplitude and the duration of the electronic pulses generated by the user in the current electronic acupuncture process according to the historical treatment data of the user and the vision change data of the user.
On the other hand, the embodiment of the present application further provides a generating device of an electronic acupuncture device, including: the scanning module scans the face of a user in a three-dimensional mode to obtain a face model of the user; the recognition module is used for determining the position information of the plurality of corresponding eye acupuncture points on the face model according to the preset position relation of the plurality of eye acupuncture points on the face of the person; the generating module is used for generating a three-dimensional eyeshade model matched with the face model according to the face model of the user and the position information of the corresponding eye acupuncture points on the face model; wherein, the positions of the three-dimensional eyeshade model corresponding to the eye acupuncture points are provided with mounting holes for mounting conductive column heads; the conductive column head is connected with an electronic acupuncture controller in an electronic acupuncture device so as to transmit electronic pulses generated by the electronic acupuncture device to the conductive column head; the processing module is used for printing the three-dimensional eyeshade model through three-dimensional printing equipment to obtain an acupuncture eyeshade; wherein the acupuncture eye shield is part of an electronic acupuncture device.
On the other hand, this application embodiment still provides an electron acupuncture device, includes: the acupuncture eye patch is obtained by three-dimensionally scanning the face of a user to obtain a face model of the user, carrying out image recognition on the face model of the user through the preset position relation of a plurality of eye acupuncture points on the face of the user to obtain the position information of the plurality of eye acupuncture points corresponding to the face model, generating a three-dimensional eye patch model matched with the face model through the face model of the user and the position information of the plurality of eye acupuncture points corresponding to the face model, and printing the three-dimensional eye patch model matched with the face model through a three-dimensional printing device; and a plurality of mounting holes are respectively provided on the acupuncture eye mask at positions corresponding to a plurality of eye acupoints of the user; the conductive column head is arranged in the mounting hole and is in wireless or wired connection with an electronic acupuncture controller in the electronic acupuncture device so as to transmit the electronic pulse generated by the electronic acupuncture device to the conductive column head; an electro-acupuncture controller for generating an electronic pulse to cause the conductive studs to conduct electro-acupuncture to the user.
The technical solution provided by the application can bring the following beneficial effects:
When a patient needs electronic acupuncture to treat myopia, amblyopia or other eye diseases, no professional is needed to locate the acupuncture points; instead, the printed acupuncture eyeshade locates them automatically, which is convenient and fast, and the user's eye acupuncture points align with the electronic acupuncture positions, i.e. the positions of the conductive column heads. Moreover, the acupuncture eyeshade differs according to each patient's facial structure, which makes the patient more comfortable when wearing it.
Drawings
The accompanying drawings, which are included to provide a further understanding of the application and are incorporated in and constitute a part of this application, illustrate embodiment(s) of the application and together with the description serve to explain the application and not to limit the application. In the drawings:
fig. 1 is a schematic flow chart of a method for generating an electronic acupuncture device in an embodiment of the present application;
FIG. 2 is a schematic view of the acupuncture eye mask of the present embodiment;
fig. 3 is a sectional view of a conductive stud in an embodiment of the present application;
FIG. 4 is a schematic diagram of an electronic acupuncture controller according to an embodiment of the present application;
FIG. 5 is a schematic block diagram of a device for generating an electronic acupuncture device according to an embodiment of the present disclosure;
FIG. 6 is a schematic diagram of a face image in an embodiment of the present application;
FIG. 7 is a schematic diagram of an eyebrow image in an embodiment of the application;
Reference numerals: 1, acupuncture eye mask; 2, mounting hole; 3, observation hole.
Detailed Description
In order to make the objects, technical solutions and advantages of the present application more apparent, the technical solutions of the present application will be described in detail and completely with reference to the following specific embodiments of the present application and the accompanying drawings. It should be apparent that the described embodiments are only a few embodiments of the present application, and not all embodiments. All other embodiments, which can be derived by a person skilled in the art from the embodiments given herein without making any creative effort, shall fall within the protection scope of the present application.
The technical solutions provided by the embodiments of the present application are described in detail below with reference to the accompanying drawings.
As shown in fig. 1, an embodiment of the present application provides a method for generating an electronic acupuncture device, including the following steps:
s101, three-dimensionally scanning the face of the user to obtain the face model of the user. Since different users have different facial structures, the positions of the eye acupuncture points on their faces are also different. In order to make the positioning of the acupuncture points of the eyes of the user more accurate, the face of the user can be scanned by a three-dimensional scanner (3D scanner) to obtain a model of the face of the user. Wherein, the user refers to a user who needs to acupuncture his eyes.
S102, determining the position information of the plurality of corresponding eye acupuncture points on the face model according to the preset positional relationship of the plurality of eye acupuncture points on the human face. After the face model of the user is obtained, image recognition may be performed on the face model to determine the locations of the eye acupuncture points on the face model, where the eye acupuncture point locations depend on the type of eye disease to be treated.
Specifically, each eye acupuncture point corresponds to a position on the face: the Jingming point lies in a depression slightly above the inner canthus; the Zanzhu point lies in a depression at the inner end of the eyebrow; the Sizhukong point lies in the depression at the tip of the eyebrow; the Tongziliao point lies beside the outer canthus; the Chengqi point lies between the eyeball and the lower edge of the orbit; the Taiyang point lies on either side of the forehead above the extension line from the outer canthus; the Qiuhou (retrobulbar) point lies at the junction of the outer 1/4 and the inner 3/4 of the lower orbital margin; and the Yuyao point lies at the midpoint of the eyebrow. Therefore, after image recognition is performed on the three-dimensional face model, the positions of the eyes, eyebrows and eye sockets can be recognized in the face image of the face model, and the positions of the eye acupuncture points on the face model can then be determined through the preset positional relationship of the eye acupuncture points on the face, i.e. the positional relationship between each eye acupuncture point and each facial feature. For example, to determine the location of the Jingming point, the position of the eyes is first identified in the face model, and then the position of the inner canthus, i.e. where the upper and lower eyelids meet at the medial end, is determined. The center of the inner canthus is then determined, and the point located a preset distance toward the inner side of the face from the center of the inner canthus is the position of the Jingming point. The preset distance can be determined according to the size of the user's inner canthus and the user's age; for example, the preset distance may be the sum of the distance from the center of the inner canthus to its farthest edge and a preset distance corresponding to the user's age. To determine the Zanzhu point, only the position of the eyebrow needs to be identified in the face model; the upper edge of the inner end of the eyebrow is then the position of the Zanzhu point. The other acupuncture points are determined in a similar manner, which is not repeated here.
As shown in fig. 6, in order to locate each acupuncture point, image processing needs to be performed on the face model, so that after the positions of human eyes and eyebrows are determined in the face model, each acupuncture point is located according to the positions of human eyes and eyebrows. For convenience of description, the areas of the human eyes and the eyebrows are collectively referred to as an eyebrow area.
It should be noted that, since the acquired face model of the user is a three-dimensional face model, image processing can be directly performed on the three-dimensional face model when image processing is performed. The two-dimensional image projected on the front surface of the three-dimensional face model may be acquired, and then the two-dimensional image may be subjected to image processing. For convenience of description, the two-dimensional image is referred to as a face image.
When image recognition is carried out, firstly, coarse positioning can be carried out on the eyebrow area by training a corresponding positioning model. When the positioning model is trained, a plurality of face images containing eyebrow areas can be collected in advance to serve as training samples, the face images serve as input, the eyebrow areas serve as output, and the model is trained. In order to reduce workload during recognition and reduce influence of color information on recognition, the face image can be grayed and then input into the model. In the following embodiments, unless otherwise specified, the images are all subjected to the graying processing.
The classifier can be obtained by training an AdaBoost learning algorithm and is realized through a multi-stage classifier structure. In the AdaBoost algorithm, each training sample is given a weight. In each iteration process, if a training sample can be correctly classified by the weak classifier of the current round, the weight of the sample needs to be reduced before learning the weak classifier of the next round, so that the importance of the sample is reduced. On the contrary, the weights of the samples misjudged by the weak classifiers in the current round are increased, so that a new round of training mainly surrounds the samples which cannot be correctly classified.
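The following sketch illustrates the sample re-weighting idea behind this AdaBoost-based coarse positioning, using scikit-learn's default decision-stump weak learners; the patch size, features and random training data are placeholder assumptions, not the training pipeline of this application.

```python
import numpy as np
from sklearn.ensemble import AdaBoostClassifier

# Illustrative training data: flattened grayscale windows sampled from face
# images (X) and labels marking whether a window covers the eyebrow and eye region (y).
X = np.random.rand(1000, 24 * 24)
y = np.random.randint(0, 2, size=1000)

# Each boosting round fits a weak classifier and then re-weights the samples:
# correctly classified windows lose weight, misclassified windows gain weight.
classifier = AdaBoostClassifier(n_estimators=50)
classifier.fit(X, y)

def eyebrow_eye_candidates(windows):
    """Return indices of windows the boosted classifier labels as eyebrow/eye regions."""
    return np.where(classifier.predict(windows) == 1)[0]
```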
As shown in fig. 7, by performing coarse positioning on the face image, an eyebrow image including an eyebrow region can be acquired. In the eyebrow image, the eye area can first be located to determine the position of the eye area in the eyebrow image.
Wherein the eye region can be positioned from both the lateral and the longitudinal direction. Since the human eye is the most varying region in the face, both in the lateral and longitudinal directions, the human eye region can be located based on the gray level variation in the face.
When positioning the eye region in the transverse direction, a transverse operator may first be constructed from the size of the eyebrow image. To construct the transverse operator, a pixel-count index W is obtained from the number of pixels in each row of the eyebrow image, and the transverse operator is obtained from W. For example, W may be the number of pixels n in each row divided by a fixed number and rounded, plus another fixed number, with the constraint that W is an odd number greater than 1.
In the case of eye patches created for the same user, the pixel count index W of the plurality of face images of the user is the same in the process of identifying the face image of the user. The pixel count index W may be the same or different for different users.
After W is obtained, if W is 5, the transverse operator may be [1, 1, 0, -1, -1]; if W is 9, the transverse operator may be [1, 1, 1, 1, 0, -1, -1, -1, -1]; and so on.
After the transverse operator is obtained, the transverse operator is convolved with the eyebrow image to obtain a transverse gray scale change curve capable of expressing the eyebrow image. In the eyebrow region, since the lateral direction of the eye includes structures such as the iris and the sclera, and the gray level change is more obvious than other positions, the maximum value in the lateral gray level change curve of the eyebrow image can be used as the center position of the eye region in the lateral direction. After the center position of the eye region in the transverse direction is determined, the upper boundary and the lower boundary of the eye region can be determined according to the center position, so that the position of the eye region in the transverse direction can be determined.
Specifically, the upper boundary and the lower boundary may be determined from the maximum of the transverse gray-scale variation curve of the eyebrow image. For example, starting from the position in the eyebrow image corresponding to that maximum, one moves upward and downward until the curve falls to a preset proportion of the maximum, for example half of the maximum; the two positions reached are taken as the upper boundary and the lower boundary of the eye region. The area between the upper and lower boundaries then determines the eye region in the transverse direction.
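A minimal sketch of this transverse positioning step is shown below, assuming the operator length W is derived from the image width and the boundary ratio is one half; these concrete choices, and the NumPy-based implementation, are illustrative assumptions.

```python
import numpy as np

def locate_eye_rows(eyebrow_gray, ratio=0.5):
    """Find the upper and lower row boundaries of the eye region, following the steps above.

    The operator length W and the boundary ratio are illustrative assumptions.
    """
    rows, cols = eyebrow_gray.shape
    # Pixel-count index W: derived from the row width, forced to an odd value > 1
    w = max(3, (cols // 40) | 1)
    # Transverse operator: W values, first half +1, a central 0, second half -1
    operator = np.concatenate([np.ones(w // 2), [0.0], -np.ones(w // 2)])
    # Convolve each row and accumulate the absolute response per row,
    # giving a transverse gray-scale variation curve over the image rows.
    curve = np.array([
        np.abs(np.convolve(row.astype(float), operator, mode="same")).sum()
        for row in eyebrow_gray
    ])
    center = int(np.argmax(curve))   # transverse center position of the eye region
    limit = ratio * curve[center]    # e.g. half of the maximum response
    upper = center
    while upper > 0 and curve[upper] > limit:
        upper -= 1
    lower = center
    while lower < rows - 1 and curve[lower] > limit:
        lower += 1
    return upper, lower              # upper and lower boundaries of the eye region
```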
After the horizontal position of the eye area is determined, the eyebrow image can be intercepted according to the upper boundary and the lower boundary to obtain a horizontal position image determined according to the horizontal position, and the longitudinal position of the eye area is determined in the image.
When determining the longitudinal position of the eye region, the transverse position image is processed first: for each pixel column of the transverse position image with abscissa x0, a longitudinal gray-scale integration function is calculated over the interval [y1, y2]. The formula of the longitudinal gray-scale integration function may be:
F(x0) = sum of I(x0, y) for y from y1 to y2,
where I(x0, y) is the gray value at column x0 and row y, and y1 and y2 are the coordinates corresponding to the upper boundary and the lower boundary of the image. The position of the image in the coordinate system may be arbitrary, for example with the lower left corner of the image as the origin, or the center point of the image as the origin, which is not limited here. Because the structure of the eye region is relatively fixed and the brightness difference between the iris, the sclera and other regions is relatively obvious, the longitudinal gray-scale integration function shows a peak or a trough at the boundary between the iris and the sclera. Combining this with the approximate position of the eyes known from prior knowledge, the positions corresponding to the two outermost peaks or troughs of the longitudinal gray-scale integration function are taken as the left boundary and the right boundary of the eye region in the longitudinal direction. The prior knowledge refers to determining the approximate position of the eye region in the image according to existing mature knowledge such as human physiological structure.
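A short sketch of this longitudinal integration step follows, assuming the summation form of the integration function given above; the extrema handling is simplified, and the half-image (left eye or right eye) is assumed to be passed in separately.

```python
import numpy as np

def locate_eye_columns(strip_gray, y1, y2):
    """Longitudinal gray-scale integration over [y1, y2] for each column, per the formula above."""
    # F(x0) = sum of gray values I(x0, y) for y in [y1, y2]
    integral = strip_gray[y1:y2 + 1, :].astype(float).sum(axis=0)
    # Locate interior extrema (peaks and troughs) of the integration curve
    d = np.diff(np.sign(np.diff(integral)))
    extrema = np.where(d != 0)[0] + 1
    if len(extrema) < 2:
        return 0, strip_gray.shape[1] - 1
    # The outermost extrema bound the eye region in the longitudinal direction
    return int(extrema[0]), int(extrema[-1])
```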
After the lateral position as well as the longitudinal position of the eye region, i.e. the upper, lower, left and right borders of the eye region in the eyebrow image, are determined, the eye region is determined. And then intercepting the eyebrow image to obtain an eye image. The eye image includes a left eye image and a right eye image, and for convenience of description, the left eye image and the right eye image are both referred to as the eye image because the left eye image and the right eye image are processed similarly in the following.
After the eye image is obtained, the edge of the eye in the eye image is identified to further determine the shape and position of the eye.
Specifically, the edge of the eye may first be coarsely located to determine the positions of the edge pixels. A coordinate system is constructed in the eye image; the location of its origin is not limited. Then, for each pixel in the eye image, the second-order directional derivative of the gray-scale image I(x, y) in the gradient direction is calculated, where the gradient direction is the direction perpendicular to the eye edge. The Laplacian of each pixel is then calculated from the second-order directional derivatives, and the pixels at which the Laplacian crosses zero are marked as edge pixels.
Locating the eye edge with the Laplacian operator is accurate only to the pixel level, while the actual edge of an imaged face generally does not coincide exactly with the pixel grid. Moreover, the eye image occupies only a small proportion of the whole face image, so if the acquisition device has low accuracy when capturing the face image, a large error may arise when determining the eye edge. Therefore, after determining the positions of the edge pixels, the embodiment of the application further locates the eye edge precisely within each edge pixel to determine the sub-pixel position of the edge.
Specifically, since the area of each pixel is already small, the portion of the edge within an edge pixel can be regarded as a straight line. The equation of the straight line is defined as x cos α + y sin α = l, where l is the distance from the coordinate center point to the edge and α is the angle between the edge gradient direction and the x axis. The two-dimensional step edge centered on the pixel (x, y) can then be modeled as a function that takes the gray value a inside the edge and jumps by the edge height b across the straight line; the exact expression, together with the closed-form sub-pixel position (x1, y1) of the two-dimensional step edge, is given in the original filing as formula images and is not reproduced here.
at this point, the position of the edge of the eye is determined.
After the positions of the edges of the eyes are determined in the eye images, the partial area, on the upper side of the upper boundary, of the eye area in the eyebrow image can be used as an eyebrow candidate area. And intercepting the area from the eyebrow image to be used as an eyebrow candidate image. And identifying in the eyebrow candidate image to determine the position of the eyebrow.
Specifically, after the eyebrow candidate image is obtained, it can be enhanced by histogram equalization. The gray histogram of the eyebrow candidate image is then obtained, several candidate gray values are selected from the histogram, and they are arranged in descending order to form a candidate gray-value set. The gray values at the troughs of the gray histogram can be selected as the candidate gray values, i.e. values whose counts in the histogram are lower than the counts of the gray values on either side.
The eyebrow candidate image is then binarized with each candidate gray value in the candidate gray-value set, yielding a plurality of binarized eyebrow candidate images. The binarization can be defined as follows: pixels whose gray value is smaller than the candidate gray value are set to 255 (white), and pixels whose gray value is greater than or equal to the candidate gray value are set to 0 (black). For convenience of description, the eyebrow candidate image after binarization is referred to below as a binarized eyebrow candidate image. Because different gray values are used during binarization, the binarized eyebrow candidate images present different content, i.e. the black and white ranges differ between images. For each binarized eyebrow candidate image, the pixels whose gray values are smaller than the corresponding candidate gray value, i.e. the white pixels, are called satisfying pixels, and the region they form is called the effective area, i.e. the white region of the binarized eyebrow candidate image. The remaining area is called the ineffective area, i.e. the black region of the binarized eyebrow candidate image.
For each image to be selected for the binary eyebrow, if the area ratio of the effective area to the ineffective area is smaller than a preset threshold value, for example, 2/3, it indicates that the area of the effective area in the image to be selected for the binary eyebrow is not too large, the gray value to be selected corresponding to the image to be selected for the binary eyebrow during the binary processing can be recorded as the effective gray value, and the image to be selected for the binary eyebrow corresponding to all the effective gray values can be obtained.
And carrying out image fusion on the binary eyebrow images to be selected corresponding to all the effective gray values. In the merging process, if the first effective region is completely included in the second effective region, the second effective region and the first effective region are merged into the second effective region, and in this case, the second effective region is referred to as including the first effective region. Similarly, if the second effective region is completely included in the third effective region, the second effective region, the first effective region, and the third effective region are merged into the third effective region, and in this case, the third effective region is referred to as including the first effective region and the second effective region. After all the binary eyebrow candidate images are fused, if the number of the effective areas included in one effective area is greater than a preset threshold value, for example, greater than 3, the effective area after the fusion is called a candidate effective area.
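A simplified sketch of this multi-threshold binarization and region-fusion procedure is given below; connected components stand in for effective regions, containment is tested by mask inclusion, and the 2/3 area ratio and count threshold of 3 are the example values from the text. The structure is an assumption, not the exact procedure of the application.

```python
import numpy as np
from scipy import ndimage

def fuse_candidate_regions(brow_gray, candidate_grays, area_ratio_limit=2/3, min_contained=3):
    """Return candidate effective regions (boolean masks) of the eyebrow candidate image."""
    masks = []  # effective regions from every valid candidate gray value
    for t in candidate_grays:
        binary = brow_gray < t                      # effective area: gray value below t
        ratio = binary.sum() / max((~binary).sum(), 1)
        if ratio >= area_ratio_limit:               # effective area too large: skip this threshold
            continue
        labels, n = ndimage.label(binary)
        masks.extend(labels == k for k in range(1, n + 1))

    fused = []  # pairs of (fused mask, number of contained effective regions)
    for m in masks:
        merged = False
        for i, (fm, count) in enumerate(fused):
            if (m & ~fm).sum() == 0:                # m is completely contained in fm
                fused[i] = (fm, count + 1)
                merged = True
                break
            if (fm & ~m).sum() == 0:                # fm is completely contained in m
                fused[i] = (m, count + 1)
                merged = True
                break
        if not merged:
            fused.append((m, 1))
    # Candidate effective regions: fused regions containing enough effective regions
    return [fm for fm, count in fused if count > min_contained]
```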
The information entropy of every candidate effective region is calculated according to the formula
H(A) = - Σ_j p(x_j) log p(x_j),
where H(A) is the information entropy of the candidate effective region and p(x_j) is the probability of the gray value j within the candidate effective region. Since eyebrows contain more information than skin, the candidate effective region with the largest information entropy can be taken as the eyebrow region. After the eyebrow region is obtained, the edge of the eyebrow can be determined within it by a method similar to the method used for determining the eye edge in the eye region in the above embodiment. At this point, the positions and shapes of the eyebrows and eyes have been determined.
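The entropy-based selection of the eyebrow region can be sketched as below; the logarithm base is an assumption, and the input is assumed to be an 8-bit grayscale region.

```python
import numpy as np

def region_entropy(gray_values):
    """Information entropy H(A) = -sum_j p(x_j) * log p(x_j) of a candidate region;
    the candidate effective region with the largest entropy is taken as the eyebrow region."""
    hist = np.bincount(gray_values.ravel(), minlength=256).astype(float)
    p = hist / hist.sum()
    p = p[p > 0]
    return float(-(p * np.log2(p)).sum())
```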
And then, according to the position relation among the acupuncture points, the eyes and the eyebrows, the positions of the acupuncture points in the face image can be determined. And then according to the corresponding relation between the face image and the face model, the positions of the acupuncture points on the three-dimensional face model of the user are determined.
When the acupuncture points are determined from the positional relationship between each acupuncture point and the eyes and eyebrows, they can be determined directly on the face image or face model according to the preset positional relationship, or a grid can be constructed on the face image and the acupuncture point positions determined from the grid. For example, the face image may be divided into A rows in the horizontal direction and B columns in the vertical direction to obtain an A x B grid over the face image. Because the edges of the eyes and the eyebrows have been determined in the face image, the grid cells corresponding to those edges are known. The grid cells corresponding to each acupuncture point can then be obtained from the preset positional relationship between the cell containing the acupuncture point and the cells at the eyebrow and eye edges, thereby determining the position of each acupuncture point, as sketched below.
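A toy sketch of this grid-based lookup follows; the grid size, reference cell and offset are hypothetical parameters introduced only for illustration.

```python
def acupoint_from_grid(face_shape, grid_rows, grid_cols, ref_cell, offset):
    """Map a reference grid cell (e.g. the cell holding the inner brow edge) plus a
    preset cell offset to pixel coordinates; all names and offsets are assumptions."""
    h, w = face_shape
    cell_h, cell_w = h / grid_rows, w / grid_cols
    point_cell = (ref_cell[0] + offset[0], ref_cell[1] + offset[1])
    # Return the pixel coordinates of the centre of the acupoint's grid cell
    return ((point_cell[0] + 0.5) * cell_h, (point_cell[1] + 0.5) * cell_w)
```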
In addition, instead of determining the acupuncture point positions from their positional relationship with the eyes and eyebrows after image recognition on the face model has established the positions and shapes of the eyes and eyebrows, the acupuncture point positions can also be recognized directly by training a corresponding recognition model.
Specifically, a plurality of three-dimensional face models, for which eye acupuncture points have been determined, may be collected in advance as training samples. When training samples are collected, a plurality of corresponding three-dimensional face models can be obtained by three-dimensionally scanning the faces of a plurality of users, then the positions of the eye acupuncture points can be determined on the three-dimensional face models manually, and the three-dimensional face models with the determined positions of the eye acupuncture points are used as the training samples for training the recognition models. The recognition model is then trained by a corresponding algorithm. The algorithm used may be a Convolutional Neural Network (CNN) or a Deep Neural Network (DNN), and the like, which is not further limited herein. After the recognition model is trained, a three-dimensional face model of the user may be collected. And then, through the trained recognition model, recognizing the positions of the eye acupuncture points on the three-dimensional face model.
It should be noted that the determination of the eye acupuncture points through image recognition or a recognition model described above may be performed locally by a corresponding device, or it may be performed by a server after the local device sends the acquired three-dimensional face model to the server, which is not limited here.
S103, generating a three-dimensional eyeshade model matched with the face model according to the face model of the user and the position information of the corresponding eye acupuncture points on the face model; wherein mounting holes for mounting conductive column heads are provided at the positions of the three-dimensional eyeshade model corresponding to the eye acupuncture points, and the conductive column heads are connected with an electronic acupuncture controller in the electronic acupuncture device so that the electronic pulses generated by the electronic acupuncture device are transmitted to the conductive column heads. After the positions of the user's eye acupuncture points on the three-dimensional face model have been obtained, a three-dimensional eyeshade model matched with the user's three-dimensional face model can be generated, so that the eyeshade fits the user's facial structure better when worn for electronic acupuncture, improving wearing comfort. In addition, because the eyeshade fits the user's facial structure, subsequent electronic acupuncture can be positioned more accurately, improving the therapeutic effect.
Specifically, when the eye mask model is generated, a three-dimensional eye mask model matched with the structure of the user can be generated according to the three-dimensional face model of the user, and the three-dimensional eye mask model can cover the eye acupuncture points of the user. After the three-dimensional eyeshade model is generated, the mounting holes 2 for mounting the conductive column heads can be arranged at the corresponding positions of the three-dimensional eyeshade model according to the positions of the eye acupuncture points on the three-dimensional face model. Wherein, each mounting hole 2 corresponds to the position of each eye acupuncture point of the face of the user.
S104, printing the three-dimensional eyeshade model through three-dimensional printing equipment to obtain an acupuncture eyeshade; wherein the acupuncture eye shield is part of an electronic acupuncture device. After the three-dimensional eye patch model is generated, the acupuncture eye patch 1 can be printed by a three-dimensional printing device. As shown in fig. 2, the acupuncture mask 1 printed by the three-dimensional printing apparatus matches the facial structure of the user, making the user more comfortable when wearing.
After the acupuncture eyeshade 1 is obtained, the conductive column heads can be installed in the mounting holes 2 on the acupuncture eyeshade 1, either manually or by corresponding production-line equipment. As shown in fig. 3, each conductive column head has a hollow structure, and the end close to the user's face is rounded so that it feels more comfortable to wear. The conductive column heads can be made of conductive silica gel, which both conducts electricity so that the user can receive electronic acupuncture and makes the acupuncture eyeshade 1 more comfortable to wear.
After the conductive column heads are installed on the acupuncture eyeshade 1, each conductive column head can be connected with an electronic acupuncture controller capable of generating electronic pulses, so that the electronic acupuncture controller transmits the electronic pulses to the conductive column heads in a wired manner; the electronic acupuncture controller may also transmit the electronic pulses to the conductive column heads wirelessly, for example by electromagnetic waves. After the user puts on the acupuncture eyeshade 1, electronic acupuncture can be performed by the electronic acupuncture controller. As shown in fig. 4, the electronic acupuncture controller includes a processor connected to a power source through an A/D converter and connected to a pulse generator through an A/D converter, where the A/D converters perform analog/digital conversion and the pulse generator generates the electronic pulses. The power source may be a built-in power supply or an external AC or DC power supply, which is not limited here. In operation, the processor receives a command sent by the user and sends a pulse command to the pulse generator through the A/D converter, where the pulse command may include one or more of the amplitude, duration and frequency of the pulses. After receiving the pulse command, the pulse generator generates the corresponding electronic pulses and transmits them to the corresponding conductive column heads.
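Purely as an illustration of the control flow described above, the pulse command handed from the processor to the pulse generator might be modeled as follows; the field names, units and the 'dac_write' hardware interface are assumptions, not part of the application.

```python
from dataclasses import dataclass

@dataclass
class PulseCommand:
    """Illustrative pulse command; field names and units are assumptions."""
    amplitude_ma: float   # pulse amplitude
    duration_ms: int      # pulse duration
    frequency_hz: float   # pulse repetition frequency
    channel: int          # which conductive column head / acupoint receives the pulse

def send_pulse_command(dac_write, command: PulseCommand) -> None:
    # The processor hands the command to the converter stage, which drives the
    # pulse generator; 'dac_write' stands in for that hardware interface.
    dac_write(command.channel, command.amplitude_ma, command.frequency_hz, command.duration_ms)
```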
Specifically, the start and stop of the pulse generator, the duration of the start, the frequency of the transmitted pulses, and the like may be controlled by control buttons provided on the surface of the electronic acupuncture controller. Of course, a wireless transmission module connected with the processor can be added in the electronic acupuncture controller to control the acupuncture instrument through remote control equipment. For example, when a user wants to start the acupuncture instrument, a start instruction can be sent to the server through the corresponding APP on the smart phone, and the start instruction contains the unique identification code of the electronic acupuncture controller to be started by the user. And the server analyzes the received starting instruction, determines the electronic acupuncture controller corresponding to the unique identification code and sends the starting instruction to the electronic acupuncture controller. After the electronic acupuncture controller receives the instruction through the wireless receiving module, the processor controls the pulse generator to work and generate electronic pulses. Certainly, a bluetooth module can be arranged in the electronic acupuncture controller, so that a user can directly control the electronic acupuncture controller through bluetooth without passing through a server.
It should be noted that the electronic acupuncture controller and the acupuncture eyeshade 1 may be used as two separate devices, or they may be integrated into a single device; for convenience of description, the integrated device is referred to as the electronic acupuncture device. In addition to the acupuncture eyeshade 1 and the electronic acupuncture controller, the electronic acupuncture device may further include a housing and a display device. To fit the display device, corresponding observation holes 3 may be provided in the acupuncture eyeshade 1 at the positions of the user's eyes so that the user can see through them. The acupuncture eyeshade 1 is disposed outside the housing, and the display device and the electronic acupuncture controller are disposed inside the housing. The display device may include a convex lens for enlarging the user's viewing range and a display screen connected to the electronic acupuncture controller for displaying corresponding pictures under its control. The electronic acupuncture device may further include a sound device connected to the electronic acupuncture controller, which plays preset audio files under the controller's control; playing preset audio files can soothe the user and enhance the therapeutic effect.
Specifically, the housing is divided into three sections. The first shell is hollow; one end is an open end connected with the second shell, and the outer side of the other end is connected with the acupuncture eyeshade 1. When a display device is provided in the electronic acupuncture device, the acupuncture eyeshade 1 has an observation hole 3, and in that case the other end of the first shell can also have an opening corresponding to the observation hole 3. The second shell is a hollow structure with two open ends, one end connected with the first shell and the other end connected with the third shell, and a convex lens is fixed inside the second shell. The third shell is a hollow structure with one open end, the open end being connected with the second shell, and a display screen and the electronic acupuncture controller are fixed inside it. The first shell and the acupuncture eyeshade 1, and the sections of the housing, may be connected by threaded structures, or by other feasible means such as adhesion, which is not limited here. When the electronic acupuncture controller is disposed in the housing, corresponding control buttons may be provided on the surface of the housing so that the user can operate it. Of course, the electronic acupuncture controller can also be controlled wirelessly, which is not described further here.
It should be noted that the above-mentioned electronic acupuncture device is only an example of the electronic acupuncture device in the present embodiment. In actual production, the appearance of the electronic acupuncture device and the position of the corresponding structure may be different from the scheme described in the embodiments of the present application. It should be understood by those skilled in the art that the above-described variations are within the scope of the embodiments of the present application as long as they are within the skill of those skilled in the art.
In one embodiment, the electronic acupuncture device further comprises an eye-movement tracking device. The eye-movement tracking device may be arranged inside the housing, for example in the first shell at a position corresponding to the observation hole 3, or it may be arranged outside the first shell at a position corresponding to the observation hole 3; any position from which the user's eye-movement trajectory can be acquired falls within the scope of the embodiments of the present application. The eye-movement tracking device is connected with the electronic acupuncture controller. When a preset picture is displayed on the display screen, the eye-movement tracking device tracks the movement trajectory of the user's eyeballs and sends it to the electronic acupuncture controller, and the electronic acupuncture controller determines whether the user is vision-impaired. Visual disorders include myopia, amblyopia, abnormal fixation, abnormal following, heterophoria and the like. For example, the display screen first prompts the user to watch an object that will move on the screen, for example a sphere; the display screen then shows a pre-stored picture, which may be a white sphere moving along a preset trajectory against a black background. The eye-movement tracking device and the electronic acupuncture controller then track the movement trajectory of the user's eyeballs to judge whether the user can follow the moving sphere in the picture, and thereby determine whether the user is vision-impaired. If so, the specific type of the user's visual disorder can be determined from the judgment result, and when electronic acupuncture is performed for that disorder, it can be applied to the corresponding eye acupuncture points with the preset frequency and amplitude of the electronic pulses according to a preset treatment scheme.
Specifically, the eye tracking device may track the movement trajectory of the user's eyeballs as follows:
an infrared camera is arranged in the eye movement tracking device; it continuously captures images containing the user's eyes while the user watches the display screen and sends them to the electronic acupuncture controller. The electronic acupuncture controller may process the images locally or transmit them to a server for processing, which is not limited herein. During processing, the user's pupil position data is determined from the eye image, and the user's gaze trajectory on the display screen is derived from the pupil position data.
Wherein the pupil position data may be obtained by:
firstly, according to preset stepped gray-level thresholds for the eye image, the eye image is segmented with different threshold values; the segmented regions and the mutual wrapping characteristics among them are obtained, and the eyeball region is extracted.
The mutual wrapping characteristic of the segmented regions mentioned here refers to the spatial feature that, within the eyeball region enclosed by the upper and lower eyelids, the sclera, the iris and the pupil wrap one another in sequence from the outside in.
Because the eyeball region enclosed by the upper and lower eyelids consists, from the outside in, of the sclera, the iris and the pupil, the gray levels of these three regions decrease in sequence. Therefore, by exploiting this stepped gray-level distribution and the mutual wrapping of the sclera, iris and pupil, the eyeball region can be extracted by setting suitable stepped gray-level thresholds and checking the mutual wrapping relations among the regions segmented at the different thresholds.
After the eyeball region is extracted, the point with the lowest gray value within it is selected as the seed point for extracting the pupil region; the complete pupil region is then obtained with a preset growth threshold, a boundary condition and any existing region-growing algorithm. The center coordinate of the pupil region is calculated from that region, and this center coordinate is the pupil center coordinate.
After the subject's pupil region is detected, a corneal reflection point is searched for near the pupil region. The corneal reflection point is the light spot formed on the eyeball surface by the infrared point light source in the eye tracking device. Since the gray level of the reflection point is far higher than that of the surrounding area, the reflection-point region is extracted by setting a suitable threshold, and the reflection-point coordinates are obtained from that region.
Pupil position data are then obtained from the pupil center coordinate and the corneal reflection point coordinate.
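For illustration only, the following Python sketch shows one possible implementation of the seed-point and region-growing step described above. The function name pupil_center, the use of 4-connectivity and the growth threshold of 15 gray levels are assumptions made for this sketch, not values taken from this application.

```python
import numpy as np
from collections import deque

def pupil_center(eye_gray, grow_thresh=15):
    """Rough pupil localisation by region growing from the darkest pixel.

    A minimal sketch of the seed-point / region-growing idea described above;
    the growth threshold and 4-connectivity are illustrative assumptions.
    """
    img = eye_gray.astype(np.int32)
    seed = np.unravel_index(np.argmin(img), img.shape)   # lowest gray value as seed
    seed_val = img[seed]
    visited = np.zeros(img.shape, dtype=bool)
    visited[seed] = True
    region = []
    q = deque([seed])
    while q:
        y, x = q.popleft()
        region.append((y, x))
        for dy, dx in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            ny, nx = y + dy, x + dx
            if (0 <= ny < img.shape[0] and 0 <= nx < img.shape[1]
                    and not visited[ny, nx]
                    and abs(img[ny, nx] - seed_val) <= grow_thresh):
                visited[ny, nx] = True
                q.append((ny, nx))
    ys, xs = zip(*region)
    return float(np.mean(xs)), float(np.mean(ys))        # pupil centre (x, y)
```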
It should be noted that the corneal reflection point is generally the brightest region in the eye image, so the eye image may also be binarized directly with a suitable preset threshold, for example a gray value of 230. The corneal reflection points are then completely separated out, yielding a binary image containing the reflection points.
However, because the subject may be wearing glasses, reflection points can also appear on the spectacle lenses, so the binary image may contain reflection points other than the corneal ones, which interferes with determining the corneal reflection points.
Therefore, the contour area of the corneal reflection point also needs to be constrained in order to eliminate interference from spectacle reflections and the like. Specifically, in the binary image containing the reflection points, the area of every reflection point is calculated, and the reflection points whose areas fall within a preset range are taken as corneal reflection points; conversely, reflection points whose areas fall outside this range are treated as interference points.
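A minimal sketch of this binarization and area-gating step is shown below. The threshold of 230 follows the example above; the function name corneal_glints and the area range used to reject spectacle reflections are illustrative assumptions.

```python
import cv2
import numpy as np

def corneal_glints(eye_gray, thresh=230, min_area=2, max_area=200):
    """Extract corneal reflection (glint) candidates by binarisation and area gating."""
    _, binary = cv2.threshold(eye_gray, thresh, 255, cv2.THRESH_BINARY)
    n, labels, stats, centroids = cv2.connectedComponentsWithStats(binary)
    glints = []
    for i in range(1, n):                       # label 0 is the background
        if min_area <= stats[i, cv2.CC_STAT_AREA] <= max_area:
            glints.append(tuple(centroids[i]))  # keep only plausibly sized spots
    return glints
```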
In the present specification, the pupil position data refers to the relative offset between the pupil center position and the corneal reflection point position in the eye image. When the pupil region contains several corneal reflection points, the pupil position data may be the relative offset between the pupil center position and the centroid of the figure enclosed by those reflection points. For example, the centroid of the figure enclosed by at least one corneal reflection point in the eye image is taken as the origin of a coordinate system measured in pixels, and the position coordinates of the pupil center in that coordinate system are the pupil position data mentioned in the embodiments of the present application.
After the user's pupil position data are obtained, the user's gaze trajectory on the display screen can be determined from them. The specific method is as follows:
the calculated pupil position data can be used as known data, and the fixation point position coordinate of the user on the display screen is calculated through a polynomial fitting algorithm.
Specifically, taking a quadratic polynomial as an example, the fixation-point coordinates (X_P, Y_P) of the subject on the display screen are computed as:

[Formula image GDA0003894879880000191 — quadratic polynomial mapping from the pupil position data (x_P, y_P) to the fixation-point coordinates (X_P, Y_P) with coefficients a_0 to a_11; not reproduced in text]

Here (x_P, y_P) are the pupil position data described in this specification. When the eye tracking device uses a single camera with a single light source, (x_P, y_P) is the relative offset between the pupil center and the corneal reflection point position in the eye image; when it uses a single camera with multiple light sources, (x_P, y_P) is the relative offset between the pupil center and the centroid of the figure enclosed by the several corneal reflection points.

The coefficients a_0 to a_11 in the above formula are unknowns to be determined, and they can be obtained through a calibration procedure. The calibration must be performed before the visual attention test: calibration points are displayed in turn at several preset positions on the display screen, so their position coordinates, i.e. the fixation-point coordinates (X_P, Y_P) in the formula, are known. For each calibration point, the subject looks at it, which yields the pupil position data for that point, i.e. x_P and y_P in the formula. Each calibration point therefore yields a set of equations with a_0 to a_11 as unknowns, so by setting enough calibration points a_0 to a_11 can be solved for. The calibration process includes at least 3 calibration points, and the more calibration points there are, the higher the accuracy of the computed fixation-point coordinates.
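Since the exact polynomial is reproduced only as an image in the original, the sketch below assumes a commonly used 12-coefficient quadratic form (a_0..a_5 for X_P, a_6..a_11 for Y_P) and fits it by least squares; the function names fit_gaze_mapping and map_gaze are assumptions. Under this assumed form, at least 6 calibration points are needed to determine the 6 coefficients per axis.

```python
import numpy as np

def fit_gaze_mapping(pupil_xy, screen_xy):
    """Fit an assumed quadratic mapping from pupil position data to gaze coordinates."""
    pupil_xy = np.asarray(pupil_xy, dtype=float)    # shape (N, 2): (x_P, y_P) per calibration point
    screen_xy = np.asarray(screen_xy, dtype=float)  # shape (N, 2): known (X_P, Y_P) per point
    x, y = pupil_xy[:, 0], pupil_xy[:, 1]
    # Design matrix: [1, x, y, x*y, x^2, y^2]
    A = np.column_stack([np.ones_like(x), x, y, x * y, x**2, y**2])
    coeff_x, *_ = np.linalg.lstsq(A, screen_xy[:, 0], rcond=None)  # a_0..a_5
    coeff_y, *_ = np.linalg.lstsq(A, screen_xy[:, 1], rcond=None)  # a_6..a_11
    return coeff_x, coeff_y

def map_gaze(coeff_x, coeff_y, x_p, y_p):
    """Map one pupil position sample to screen coordinates with the fitted coefficients."""
    feats = np.array([1.0, x_p, y_p, x_p * y_p, x_p**2, y_p**2])
    return float(feats @ coeff_x), float(feats @ coeff_y)
```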
After the user's gaze trajectory on the display screen is obtained, it can be compared with the movement trajectory of the sphere in the preset picture to judge whether the user is a vision-impaired user. Some obvious manifestations of vision impairment can be determined simply by observing the user's eyes, but less obvious ones, such as latent strabismus, cannot be determined by observation. In such cases the device of the embodiments of the present application can determine definitively whether the user suffers from the vision impairment, so that the corresponding treatment can be given, which is both convenient and accurate.
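One simple way to perform this comparison is sketched below; the deviation tolerance, the required fraction of on-target samples and the function name follows_target are assumptions for illustration and are not specified by this application.

```python
import numpy as np

def follows_target(gaze_track, target_track, tol_px=60, min_ratio=0.8):
    """Decide whether the gaze track follows the moving sphere.

    Both tracks are (N, 2) arrays of screen coordinates sampled at the same times.
    """
    gaze = np.asarray(gaze_track, dtype=float)
    target = np.asarray(target_track, dtype=float)
    dist = np.linalg.norm(gaze - target, axis=1)   # per-sample distance to the sphere
    on_target_ratio = float(np.mean(dist <= tol_px))
    return on_target_ratio >= min_ratio
```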
Of course, while electronic acupuncture is performed on a vision-impaired user, a corresponding multimedia file can be played on the display screen to help the user train. For example, users with amblyopia may be treated with grating (raster) therapy: by watching the multimedia file played on the display screen, the visual cells in the eyes receive stimulation in all orientations, which strengthens the central visual nerve cells and improves eyesight. The multimedia file may include a lower-layer pattern and an upper-layer grating, and the upper-layer grating may be a black-and-white striped pattern that rotates at a certain speed.
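A minimal sketch of how such a rotating black-and-white grating frame could be rendered is given below; the frame size, stripe period and rotation step are illustrative assumptions.

```python
import numpy as np

def grating_frame(size=512, period_px=32, angle_deg=0.0):
    """Render one frame of a black-and-white grating rotated by angle_deg."""
    half = size // 2
    ys, xs = np.mgrid[-half:half, -half:half]
    theta = np.deg2rad(angle_deg)
    # Coordinate along the grating's normal direction after rotation.
    u = xs * np.cos(theta) + ys * np.sin(theta)
    stripes = ((u // (period_px // 2)) % 2 == 0)
    return (stripes * 255).astype(np.uint8)       # 0 = black, 255 = white

# Example: advance the rotation a little on every displayed frame.
frames = [grating_frame(angle_deg=a) for a in range(0, 360, 6)]
```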
In one embodiment, the eyebrow-eye images acquired by the eye tracking device may be unclear because the user's head shakes while wearing the device, which would affect the subsequent recognition of the eyebrow-eye images. Therefore, when the server acquires an eyebrow-eye image, it can first judge whether the acquired image is blurred and, if so, perform image deblurring on it, thereby improving the sharpness of the image and the accuracy of subsequent image recognition.
Specifically, since deblurring a non-blurred image would degrade its original quality, the server first judges whether the eyebrow-eye image is blurred. It determines the gradient maps of the eyebrow-eye image according to

[Formula image GDA0003894879880000201 — not reproduced in text]

and, based on

[Formula image GDA0003894879880000202 — not reproduced in text]

judges whether the eyebrow-eye image is a blurred image. Here g_x(i, j) and g_y(i, j) are the gradient maps of the eyebrow-eye image f in the x and y directions respectively, m and n are the numbers of rows and columns of the eyebrow-eye image f in the x and y directions respectively, and G_num is the sum of the numbers of non-zero gradient values in the x-direction and y-direction gradient maps. When S < 7, the server judges the eyebrow-eye image to be a blurred image; the value 7 can be determined experimentally.
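Because the exact expression for S is reproduced only as an image, the sketch below assumes S is the mean gradient magnitude over the G_num pixels with non-zero gradient; the function names sharpness_score and is_blurred are also assumptions made for illustration.

```python
import numpy as np

def sharpness_score(gray):
    """Gradient-based sharpness score S for an eyebrow-eye image (assumed form)."""
    f = gray.astype(np.float64)
    g_x = np.zeros_like(f)
    g_y = np.zeros_like(f)
    g_x[:, :-1] = f[:, 1:] - f[:, :-1]      # horizontal differences
    g_y[:-1, :] = f[1:, :] - f[:-1, :]      # vertical differences
    mag = np.sqrt(g_x**2 + g_y**2)
    g_num = np.count_nonzero(g_x) + np.count_nonzero(g_y)
    return mag.sum() / g_num if g_num else 0.0

def is_blurred(gray, threshold=7.0):
    """Apply the empirical S < 7 rule described above."""
    return sharpness_score(gray) < threshold
```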
Secondly, the server can determine a foreground blurred image within the blurred image according to

[Formula image GDA0003894879880000211 — not reproduced in text]

[Formula image GDA0003894879880000212 — not reproduced in text]

and

[Formula image GDA0003894879880000213 — not reproduced in text]

where q(x, y) is the foreground blurred image, c is a third preset value, d is a fourth preset value, N_h is the total number of pixels in the neighborhood of the pixel at (x, y) in the blurred image, h(x, y) is the set of pixels in that neighborhood, I(s, t) is the gray value of pixel (s, t) in that neighborhood, and m(x, y) is the mean gray value over that neighborhood.
Finally, the server can process the determined foreground blurred image with Gaussian filtering to obtain a sharp foreground image, which is then used as the deblurred eyebrow-eye image for image recognition.
This image processing method separates the foreground portion of the blurred image, i.e. the region the user is assumed to be gazing at, from the original image so that only that portion is processed, restoring image sharpness while reducing the workload of the device.
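Since the foreground-selection formulas are reproduced only as images, the sketch below uses an assumed local-statistics mask (deviation from the neighborhood mean m(x, y) exceeding c for at least a fraction d of the neighborhood) and, as a stand-in for the Gaussian-filtering step, applies unsharp masking built from a Gaussian blur inside the mask. The function name deblur_foreground and all parameter values are illustrative assumptions.

```python
import cv2
import numpy as np

def deblur_foreground(gray, c=15.0, d=0.6, win=15, sigma=1.5):
    """Separate an assumed foreground region from a blurred eyebrow-eye image and sharpen it."""
    f = gray.astype(np.float32)
    mean = cv2.blur(f, (win, win))                     # local mean m(x, y)
    deviates = (np.abs(f - mean) > c).astype(np.float32)
    ratio = cv2.blur(deviates, (win, win))             # fraction of deviating neighbors
    q = ratio >= d                                     # assumed foreground mask q(x, y)
    # Unsharp masking (Gaussian-blur based) applied only inside the mask.
    blurred = cv2.GaussianBlur(f, (0, 0), sigma)
    sharpened = cv2.addWeighted(f, 1.5, blurred, -0.5, 0)
    out = np.where(q, sharpened, f)
    return np.clip(out, 0, 255).astype(np.uint8), q
```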
In one embodiment, if the user has already undergone several sessions of electronic acupuncture before the current one, the user's historical treatment data and vision change data may be collected to determine whether the previous sessions achieved the expected effect. The vision change data may include at least one of uncorrected visual acuity and corrected visual acuity; it can be entered by the user or measured with a corresponding vision testing device and sent to the electronic acupuncture controller. After collecting the user's vision change data, the electronic acupuncture controller sends the user information and the vision change data to the server. The server determines the user's historical treatment data from the user information and then determines from the vision change data whether the user's current vision meets the preset treatment target. If the expected treatment target has been reached, treatment may continue with the previous regimen; if not, the treatment strategy can be modified according to the user's current vision value and fed back to the electronic acupuncture controller. Of course, the user can also modify the treatment strategy through a corresponding APP. The historical treatment data and the treatment strategy include the frequency, amplitude and duration of the electronic pulses generated by the acupuncture instrument during the user's previous treatments. For example, if the predetermined treatment goal was for the user's vision to reach 1.0 but it has actually reached only 0.8, the frequency, amplitude or duration of the generated pulses may be increased as appropriate on the basis of the historical treatment data.
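A minimal sketch of this feedback step is shown below. The 10% adjustment step, the choice to scale frequency and duration while leaving amplitude unchanged, and the names PulseParams and adjust_plan are illustrative assumptions, not a prescribed clinical rule from this application.

```python
from dataclasses import dataclass

@dataclass
class PulseParams:
    frequency_hz: float
    amplitude_ma: float
    duration_min: float

def adjust_plan(current: PulseParams, target_acuity: float,
                measured_acuity: float, step: float = 0.1) -> PulseParams:
    """Adjust the electronic-pulse plan when the treatment target is missed."""
    if measured_acuity >= target_acuity:
        return current                          # target met: keep the previous regimen
    return PulseParams(
        frequency_hz=current.frequency_hz * (1 + step),
        amplitude_ma=current.amplitude_ma,      # amplitude left unchanged in this sketch
        duration_min=current.duration_min * (1 + step),
    )

# Example: target 1.0, measured 0.8 -> frequency and duration are increased.
new_plan = adjust_plan(PulseParams(2.0, 1.0, 20.0), 1.0, 0.8)
```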
When a patient needs electronic acupuncture to treat myopia, amblyopia or other eye diseases, a professional is no longer required to locate the acupoints; instead, the acupoints are located by the printed acupuncture eye mask. This is not only convenient and fast, but also makes the user's eye acupoints match the electronic acupuncture positions, i.e. the positions of the conductive stud heads, more closely. Moreover, the acupuncture eye mask differs according to each patient's facial structure, which makes the patient more comfortable when wearing it.
In another aspect, as shown in fig. 5, the present application also provides a generation device for an electronic acupuncture device, the electronic acupuncture device including an acupuncture instrument for generating electronic pulses. The generation device comprises:
the scanning module 201 scans the face of the user in three dimensions to obtain the user face model;
the recognition module 202 is used for determining the position information of a plurality of eye acupuncture points corresponding to the face model according to the position relation of the preset eye acupuncture points on the face of the person;
a generating module 203 for generating a three-dimensional eyeshade model matched with the face model according to the face model of the user and the position information of the corresponding eye acupuncture points on the face model; wherein, the positions of the three-dimensional eyeshade model corresponding to the eye acupuncture points are provided with mounting holes for mounting conductive column heads; the conductive column head is connected with an electronic acupuncture controller in an electronic acupuncture device so as to transmit electronic pulses generated by the electronic acupuncture device to the conductive column head;
the processing module 204 is used for printing the three-dimensional eyeshade model through three-dimensional printing equipment to obtain an acupuncture eyeshade; wherein the acupuncture eye shield is part of an electronic acupuncture device.
The above description is merely one or more embodiments of the present disclosure and is not intended to limit the present disclosure. Various modifications and alterations to one or more embodiments of the present description will be apparent to those skilled in the art. Any modification, equivalent replacement, improvement or the like made within the spirit and principle of one or more embodiments of the present specification should be included in the scope of the claims of the present specification.

Claims (6)

1. A generation method of a personalized electronic acupuncture device, characterized by comprising the following steps:
the method comprises the steps of three-dimensionally scanning the face of a user to obtain a face model of the user;
determining position information of a plurality of corresponding eye acupuncture points on the face model according to the preset position relation of the eye acupuncture points on the face of a person;
generating a three-dimensional eyeshade model matched with the face model according to the face model of the user and the position information of a plurality of corresponding eye acupuncture points on the face model; wherein, the positions of the three-dimensional eyeshade model corresponding to the eye acupuncture points are provided with mounting holes for mounting conductive column heads; the conductive column head is connected with an electronic acupuncture controller in an electronic acupuncture device so as to transmit electronic pulses generated by the electronic acupuncture device to the conductive column head;
printing the three-dimensional eyeshade model through three-dimensional printing equipment to obtain an acupuncture eyeshade; wherein the acupuncture eye shield is part of an electronic acupuncture device;
determining the position information of the plurality of eye acupuncture points corresponding to the face model according to the preset position relation of the plurality of eye acupuncture points on the face of the person, and specifically comprises the following steps:
determining an eyebrow image of the user by adopting a coarse positioning mode for the face model; wherein the eyebrow image is an image containing eyebrows and eye areas of the user;
carrying out gray processing on the eyebrow eye image; constructing a transverse operator according to the size of the eyebrow image subjected to the gray processing, wherein the transverse operator is an odd number;
convolving the transverse operator with the eyebrow image subjected to the gray processing to obtain a transverse gray variation curve of the eyebrow image; taking the maximum value of the transverse gray scale change curve of the eyebrow eye image as the transverse center position of the eye area; taking two positions which are respectively upward and downward along the longitudinal direction until reaching a preset proportion as an upper boundary and a lower boundary of the eye area at the transverse center position of the eye area; intercepting the eyebrow image according to the upper boundary and the lower boundary of the eye area to obtain a transverse position image of the eye area;
calculating a longitudinal gray scale integration function between the upper boundary and the lower boundary for each pixel in the left half image or the right half image of the transverse position image to obtain a longitudinal gray scale integration function image; in all the wave crests and wave troughs of the longitudinal gray scale integral function image, taking the positions corresponding to the wave crests or wave troughs on the leftmost side and the rightmost side in the transverse position image as the left boundary and the right boundary of the eye area in the longitudinal direction;
intercepting the transverse position image according to the left boundary and the right boundary of the eye region, and determining the eye region on the face model;
according to the determined eye areas and the corresponding position relations of the eye acupuncture points on the human face and the eye areas, determining the position information of the corresponding eye acupuncture points on the face model;
after determining the eyebrow image of the user by adopting a rough positioning mode for the face model, the method further comprises the following steps:
according to

[Formula image FDA0003894879870000021 — not reproduced in text]

and

[Formula image FDA0003894879870000022 — not reproduced in text]

when S < 7, judging the eyebrow-eye image to be a blurred image, wherein g_x(i, j) and g_y(i, j) are the gradient maps of the eyebrow-eye image f in the x and y directions respectively, m and n are the numbers of rows and columns of the eyebrow-eye image f in the x and y directions respectively, and G_num is the sum of the numbers of non-zero gradient values in the x-direction and y-direction gradient maps;
according to

[Formula image FDA0003894879870000023 — not reproduced in text]

[Formula image FDA0003894879870000024 — not reproduced in text]

and

[Formula image FDA0003894879870000025 — not reproduced in text]

determining a foreground blurred image in the blurred image, wherein q(x, y) is the foreground blurred image, c is a third preset value, d is a fourth preset value, N_h is the total number of pixels in the neighborhood of the pixel at position (x, y) in the blurred image, h(x, y) is the set of pixel points in that neighborhood, I(s, t) is the gray value of pixel (s, t) in that neighborhood, and m(x, y) is the mean gray value over that neighborhood;
processing the determined foreground blurred image by adopting Gaussian filtering to obtain a foreground clear image which is used as the eyebrow image subjected to image deblurring;
an observation hole is formed in the three-dimensional eyeshade model at a position corresponding to the eyes of the user;
the electronic acupuncture device further comprises:
the display device is connected with the electronic acupuncture controller, and the position of the display device corresponds to that of the observation hole;
the eye movement tracking device is connected with the electronic acupuncture controller and used for acquiring an eye image when the user watches a preset picture when the display device displays the preset picture, sending the eye image to the electronic acupuncture controller, enabling the electronic acupuncture controller to obtain a watching track of the user on the display device according to the eye image, and determining whether the user is a vision-impaired user or not based on the preset picture and the watching track;
the display device is also used for receiving a playing instruction sent by the electronic acupuncture controller when the electronic acupuncture controller determines that the vision disorder type of the user is amblyopia, and playing a multimedia file according to the playing instruction; the picture corresponding to the multimedia file comprises a lower layer pattern and an upper layer bar; the upper layer bar grating is a bar grating pattern which is alternate in black and white and rotates at a preset speed;
the electronic acupuncture device further includes:
the first shell is of a hollow structure with one open end, and the outer side of the other end of the first shell is connected with the acupuncture eyepatch and used for fixing the acupuncture eyepatch;
the second shell is of a hollow structure with two open ends, wherein one end of the second shell is connected with the open end of the first shell;
the third shell is of a hollow structure with one end open and the other end closed, wherein the open end of the third shell is connected with the other end of the second shell;
the conductive column head is hollow, and one end of the conductive column head close to the face of a user is in an arc shape; the conductive column head material is conductive silica gel.
2. The method of claim 1, wherein the coarse positioning comprises: determining the eyebrow image of the user in the face image corresponding to the face model through the trained positioning model; and when the positioning model is trained, the facial images of a plurality of users are input, and the eyebrow images of the users are output.
3. The method of claim 1, wherein after determining the eye region on the face model, the method further comprises:
roughly locating the edge of the image corresponding to the eye region to obtain edge pixels, and representing the edge at each edge pixel by the straight line x·cosα + y·sinα = l, wherein l is the distance from the coordinate center point to the edge and α is the angle between the edge gradient direction and the x axis;
taking the sub-pixel position (x_1, y_1) of the two-dimensional step model

[Formula image FDA0003894879870000041 — not reproduced in text]

as the eye edge position, wherein a is the gray value inside the edge and b is the edge height.
4. The method of claim 1, wherein after determining the eye region on the face model, the method further comprises:
taking the part of the eyebrow image without the eye area as an eyebrow candidate image;
selecting a plurality of gray values to be selected according to the gray histogram of the image to be selected of the eyebrows, and carrying out binarization processing on the image to be selected of the eyebrows according to the plurality of gray values to be selected to obtain a plurality of images to be selected of the eyebrows after binarization processing; during binarization processing, an area formed by pixels with gray values smaller than a gray value to be selected is called an effective area;
performing image fusion on the plurality of binaryzation-processed eyebrow images to be selected to obtain fused images; when the first effective region is completely contained in the second effective region during image fusion, the second effective region is called as containing the first effective region; and the effective region contained in the fused image is called as a fusion effective region;
for each fusion effective region in the fused image, if the number of the effective regions contained in the fusion effective region is greater than a preset threshold value, the fusion effective region is called a candidate effective region;
determining a candidate effective region where the eyebrow is located according to the information entropy of each candidate effective region in the fused image, and calling the candidate effective region where the eyebrow is located as an eyebrow region;
according to the determined eye regions and the corresponding position relations of the eye acupuncture points on the human face and the eye regions, determining the position information of the corresponding eye acupuncture points on the face model, specifically comprising:
and determining the position information of the corresponding eye acupuncture points on the face model according to the determined eye area, the determined eyebrow area and the corresponding position relationship between the eye acupuncture points and the eyebrow area on the human face.
5. An apparatus for generating an electronic acupuncture device, comprising:
the scanning module scans the face of a user in a three-dimensional mode to obtain a face model of the user;
the recognition module is used for determining the position information of the plurality of corresponding eye acupuncture points on the face model according to the preset position relation of the plurality of eye acupuncture points on the face of the person;
the generating module is used for generating a three-dimensional eyeshade model matched with the face model according to the face model of the user and the position information of the corresponding eye acupuncture points on the face model; wherein, the positions of the three-dimensional eyeshade model corresponding to the eye acupuncture points are provided with mounting holes for mounting conductive column heads; the conductive column head is connected with an electronic acupuncture controller in an electronic acupuncture device so as to transmit electronic pulses generated by the electronic acupuncture device to the conductive column head;
the processing module is used for printing the three-dimensional eyeshade model through three-dimensional printing equipment to obtain an acupuncture eyeshade; wherein the acupuncture eye shield is part of an electronic acupuncture device;
the method for determining the position information of the plurality of eye acupuncture points corresponding to the face model according to the position relation of the plurality of preset eye acupuncture points on the face of the person specifically comprises the following steps:
determining an eyebrow image of the user by adopting a coarse positioning mode for the face model; wherein the eyebrow image is an image containing eyebrows and eye areas of the user;
carrying out gray processing on the eyebrow eye image; constructing a transverse operator according to the size of the eyebrow image subjected to the gray processing, wherein the transverse operator is an odd number;
convolving the transverse operator with the eyebrow image subjected to the gray processing to obtain a transverse gray variation curve of the eyebrow image; taking the maximum value of the transverse gray scale change curve of the eyebrow eye image as the transverse central position of the eye area; at the transverse center position of the eye area, taking two positions which are respectively upward and downward along the longitudinal direction until a preset proportion is reached as an upper boundary and a lower boundary of the eye area; intercepting the eyebrow image according to the upper boundary and the lower boundary of the eye area to obtain a transverse position image of the eye area;
calculating a longitudinal gray scale integration function between the upper boundary and the lower boundary for each pixel in the left half image or the right half image of the transverse position image to obtain a longitudinal gray scale integration function image; in all the wave crests and wave troughs of the longitudinal gray scale integral function image, taking the positions corresponding to the wave crests or wave troughs on the leftmost side and the rightmost side in the transverse position image as the left boundary and the right boundary of the eye area in the longitudinal direction;
intercepting the transverse position image according to the left boundary and the right boundary of the eye area, and determining the eye area on the face model;
according to the determined eye regions and the corresponding position relations of the eye acupuncture points on the human face and the eye regions, determining the position information of the corresponding eye acupuncture points on the face model;
after the face model is roughly positioned and the eyebrow image of the user is determined, the method further comprises the following steps:
according to

[Formula image FDA0003894879870000061 — not reproduced in text]

and

[Formula image FDA0003894879870000062 — not reproduced in text]

when S < 7, judging the eyebrow-eye image to be a blurred image, wherein g_x(i, j) and g_y(i, j) are the gradient maps of the eyebrow-eye image f in the x and y directions respectively, m and n are the numbers of rows and columns of the eyebrow-eye image f in the x and y directions respectively, and G_num is the sum of the numbers of non-zero gradient values in the x-direction and y-direction gradient maps;
according to

[Formula image FDA0003894879870000063 — not reproduced in text]

[Formula image FDA0003894879870000064 — not reproduced in text]

and

[Formula image FDA0003894879870000065 — not reproduced in text]

determining a foreground blurred image in the blurred image, wherein q(x, y) is the foreground blurred image, c is a third preset value, d is a fourth preset value, N_h is the total number of pixels in the neighborhood of the pixel at (x, y) in the blurred image, h(x, y) is the set of pixel points in that neighborhood, I(s, t) is the gray value of pixel (s, t) in that neighborhood, and m(x, y) is the mean gray value over that neighborhood;
processing the determined foreground blurred image by adopting Gaussian filtering to obtain a foreground clear image which is used as an eyebrow image subjected to image deblurring;
an observation hole is formed in the three-dimensional eyeshade model at a position corresponding to the eyes of the user;
the electronic acupuncture device further includes:
the display device is connected with the electronic acupuncture controller, and the position of the display device corresponds to that of the observation hole;
the eye movement tracking device is connected with the electronic acupuncture controller and used for acquiring an eye image when the user watches a preset picture when the display device displays the preset picture, and sending the eye image to the electronic acupuncture controller, so that the electronic acupuncture controller obtains a watching track of the user on the display device according to the eye image, and determines whether the user is a vision-impaired user based on the preset picture and the watching track;
the display device is also used for receiving a playing instruction sent by the electronic acupuncture controller when the electronic acupuncture controller determines that the vision disorder category of the user is amblyopia, and playing a multimedia file according to the playing instruction; the picture corresponding to the multimedia file comprises a lower layer pattern and an upper layer bar; the upper layer bar grating is a bar grating pattern which is alternate in black and white and rotates at a preset speed;
the electronic acupuncture device further comprises:
the first shell is of a hollow structure with one open end, and the outer side of the other end of the first shell is connected with the acupuncture eyepatch and used for fixing the acupuncture eyepatch;
the second shell is of a hollow structure with two open ends, wherein one end of the second shell is connected with the open end of the first shell;
the third shell is of a hollow structure with one end open and the other end closed, wherein the open end of the third shell is connected with the other end of the second shell;
the conductive column head is hollow, and one end of the conductive column head close to the face of a user is in an arc shape; the conductive column head material is conductive silica gel.
6. An electronic acupuncture device, comprising:
the acupuncture eye patch is obtained by three-dimensionally scanning the face of a user to obtain a face model of the user, carrying out image recognition on the face model of the user through the preset position relation of a plurality of eye acupuncture points on the face of the user to obtain the position information of the plurality of eye acupuncture points corresponding to the face model, generating a three-dimensional eye patch model matched with the face model through the face model of the user and the position information of the plurality of eye acupuncture points corresponding to the face model, and printing the three-dimensional eye patch model matched with the face model through a three-dimensional printing device; and a plurality of mounting holes are respectively provided on the acupuncture eye mask at positions corresponding to a plurality of eye acupoints of the user;
a conductive column head disposed in the mounting hole and connected with an electronic acupuncture controller in an electronic acupuncture device in a wireless or wired manner, so as to transmit an electronic pulse generated by the electronic acupuncture device to the conductive column head;
an electro-acupuncture controller for generating an electronic pulse to cause the conductive studs to conduct electro-acupuncture on the user;
the method for determining the position information of the eye acupuncture points on the face model according to the position relationship of the preset eye acupuncture points on the face of the person specifically comprises the following steps:
determining an eyebrow image of the user by adopting a coarse positioning mode for the face model; wherein the eyebrow image is an image containing eyebrows and eye areas of the user;
carrying out gray level processing on the eyebrow and eye images; constructing a transverse operator according to the size of the eyebrow image subjected to the gray processing, wherein the transverse operator is an odd number;
convolving the transverse operator with the eyebrow image subjected to the gray processing to obtain a transverse gray variation curve of the eyebrow image; taking the maximum value of the transverse gray scale change curve of the eyebrow eye image as the transverse central position of the eye area; at the transverse center position of the eye area, taking two positions which are respectively upward and downward along the longitudinal direction until a preset proportion is reached as an upper boundary and a lower boundary of the eye area; intercepting the eyebrow image according to the upper boundary and the lower boundary of the eye area to obtain a transverse position image of the eye area;
calculating a longitudinal gray scale integration function between the upper boundary and the lower boundary for each pixel in the left half image or the right half image of the transverse position image to obtain a longitudinal gray scale integration function image; in all the wave crests and wave troughs of the longitudinal gray scale integral function image, taking the positions corresponding to the wave crests or wave troughs on the leftmost side and the rightmost side in the transverse position image as the left boundary and the right boundary of the eye area in the longitudinal direction;
intercepting the transverse position image according to the left boundary and the right boundary of the eye area, and determining the eye area on the face model;
according to the determined eye regions and the corresponding position relations of the eye acupuncture points on the human face and the eye regions, determining the position information of the corresponding eye acupuncture points on the face model;
after the face model is roughly positioned and the eyebrow image of the user is determined, the method further comprises the following steps:
according to

[Formula image FDA0003894879870000091 — not reproduced in text]

and

[Formula image FDA0003894879870000092 — not reproduced in text]

when S < 7, judging the eyebrow-eye image to be a blurred image, wherein g_x(i, j) and g_y(i, j) are the gradient maps of the eyebrow-eye image f in the x and y directions respectively, m and n are the numbers of rows and columns of the eyebrow-eye image f in the x and y directions respectively, and G_num is the sum of the numbers of non-zero gradient values in the x-direction and y-direction gradient maps;
according to

[Formula image FDA0003894879870000093 — not reproduced in text]

[Formula image FDA0003894879870000094 — not reproduced in text]

and

[Formula image FDA0003894879870000095 — not reproduced in text]

determining a foreground blurred image in the blurred image, wherein q(x, y) is the foreground blurred image, c is a third preset value, d is a fourth preset value, N_h is the total number of pixels in the neighborhood of the pixel at (x, y) in the blurred image, h(x, y) is the set of pixel points in that neighborhood, I(s, t) is the gray value of pixel (s, t) in that neighborhood, and m(x, y) is the mean gray value over that neighborhood;
processing the determined foreground blurred image by adopting Gaussian filtering to obtain a foreground clear image which is used as an eyebrow image subjected to image deblurring;
an observation hole is formed in the three-dimensional eyeshade model at a position corresponding to the eyes of the user;
the electronic acupuncture device further includes:
the display device is connected with the electronic acupuncture controller, and the position of the display device corresponds to that of the observation hole;
the eye movement tracking device is connected with the electronic acupuncture controller and used for acquiring an eye image when the user watches a preset picture when the display device displays the preset picture, and sending the eye image to the electronic acupuncture controller, so that the electronic acupuncture controller obtains a watching track of the user on the display device according to the eye image, and determines whether the user is a vision-impaired user based on the preset picture and the watching track;
the display device is also used for receiving a playing instruction sent by the electronic acupuncture controller when the electronic acupuncture controller determines that the vision disorder type of the user is amblyopia, and playing a multimedia file according to the playing instruction; the picture corresponding to the multimedia file comprises a lower layer pattern and an upper layer bar; the upper layer bar grating is a bar grating pattern which is alternate in black and white and rotates at a preset speed;
the electronic acupuncture device further comprises:
the first shell is of a hollow structure with one open end, and the outer side of the other end of the first shell is connected with the acupuncture eyepatch and used for fixing the acupuncture eyepatch;
the second shell is of a hollow structure with two open ends, wherein one end of the second shell is connected with the open end of the first shell;
the third shell is of a hollow structure with one open end and the other closed end, wherein the open end of the third shell is connected with the other end of the second shell;
the conductive column head is hollow, and one end of the conductive column head close to the face of a user is in an arc shape; the conductive column cap is made of conductive silica gel.
CN201910704753.3A 2019-07-31 2019-07-31 Personalized electronic acupuncture device and generation method and generation device thereof Active CN110585592B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201910704753.3A CN110585592B (en) 2019-07-31 2019-07-31 Personalized electronic acupuncture device and generation method and generation device thereof

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201910704753.3A CN110585592B (en) 2019-07-31 2019-07-31 Personalized electronic acupuncture device and generation method and generation device thereof

Publications (2)

Publication Number Publication Date
CN110585592A CN110585592A (en) 2019-12-20
CN110585592B true CN110585592B (en) 2022-11-22

Family

ID=68853238

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201910704753.3A Active CN110585592B (en) 2019-07-31 2019-07-31 Personalized electronic acupuncture device and generation method and generation device thereof

Country Status (1)

Country Link
CN (1) CN110585592B (en)

Families Citing this family (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN111529194B (en) * 2020-05-29 2022-05-03 山东第一医科大学附属眼科医院(山东省眼科医院) Pressurizing belt for eyes
CN112288855A (en) * 2020-10-29 2021-01-29 张也弛 Method and device for establishing eye gaze model of operator
CN112581518A (en) * 2020-12-25 2021-03-30 百果园技术(新加坡)有限公司 Eyeball registration method, device, server and medium based on three-dimensional cartoon model
CN113975152B (en) * 2021-11-01 2023-10-31 潍坊信行中直医疗科技有限公司 Individualized skin penetrating and supporting positioning device based on 3D printing and manufacturing method thereof

Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US5878156A (en) * 1995-07-28 1999-03-02 Mitsubishi Denki Kabushiki Kaisha Detection of the open/closed state of eyes based on analysis of relation between eye and eyebrow images in input face images
CN104941063A (en) * 2014-08-26 2015-09-30 毕宏生 Head-mounted adjustable acupuncture instrument for myopia treatment
CN106511067A (en) * 2016-12-23 2017-03-22 广东工业大学 Knee acupoint template and preparation method thereof
CN108478399A (en) * 2018-02-01 2018-09-04 上海青研科技有限公司 A kind of amblyopia training instrument

Patent Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US5878156A (en) * 1995-07-28 1999-03-02 Mitsubishi Denki Kabushiki Kaisha Detection of the open/closed state of eyes based on analysis of relation between eye and eyebrow images in input face images
CN104941063A (en) * 2014-08-26 2015-09-30 毕宏生 Head-mounted adjustable acupuncture instrument for myopia treatment
CN106511067A (en) * 2016-12-23 2017-03-22 广东工业大学 Knee acupoint template and preparation method thereof
CN108478399A (en) * 2018-02-01 2018-09-04 上海青研科技有限公司 A kind of amblyopia training instrument

Non-Patent Citations (3)

* Cited by examiner, † Cited by third party
Title
Research on Human Eye Segmentation and Pupil Localization; Chen Mohan; China Master's Theses Full-text Database, Information Science and Technology; 2018-06-15; pp. I138-1464 *
Research on Gaze Tracking within Video Image Sequences; Yu Qiong; China Master's Theses Full-text Database, Information Science and Technology; 2011-09-15; pp. I138-1224 *
Research on Deblurring Algorithms for Motion-Blurred Images; Zhou Fei; China Doctoral and Master's Dissertations Full-text Database (Master's), Information Science and Technology; 2018-04-15; pp. I138-2384 *

Also Published As

Publication number Publication date
CN110585592A (en) 2019-12-20

Similar Documents

Publication Publication Date Title
CN110585592B (en) Personalized electronic acupuncture device and generation method and generation device thereof
US11733542B2 (en) Light field processor system
CN108427503B (en) Human eye tracking method and human eye tracking device
CN104603673B (en) Head-mounted system and the method for being calculated using head-mounted system and rendering digital image stream
Chen et al. Simulating prosthetic vision: I. Visual models of phosphenes
JP2023022142A (en) Screening apparatus and method
CN109875863B (en) Head-mounted VR eyesight improving system based on binocular vision and mental image training
van Rheede et al. Simulating prosthetic vision: Optimizing the information content of a limited visual display
Fornos et al. Simulation of artificial vision, III: do the spatial or temporal characteristics of stimulus pixelization really matter?
Otero-Millan et al. Knowing what the brain is seeing in three dimensions: A novel, noninvasive, sensitive, accurate, and low-noise technique for measuring ocular torsion
CN107260506B (en) 3D vision training system, intelligent terminal and head-mounted device based on eye movement
CN107307981B (en) Control method of head-mounted display device
CN107028738B (en) Vision-training system, intelligent terminal and helmet based on eye movement
CN107291233B (en) Wear visual optimization system, intelligent terminal and head-mounted device of 3D display device
CN110850596B (en) Two-side eye vision function adjusting device and virtual reality head-mounted display equipment
CN107137211A (en) The 3D vision training methods moved based on eye
Qiu et al. Motion parallax improves object recognition in the presence of clutter in simulated prosthetic vision
Chauvin et al. Natural scene perception: visual attractors and images processing
Lu et al. Optimizing Chinese character displays improves recognition and reading performance of simulated irregular phosphene maps
CN110585591B (en) Brain vision detection and analysis equipment and method based on nerve feedback
CN114967128A (en) Sight tracking system and method applied to VR glasses
CN110882139B (en) Visual function adjusting method and device by using graph sequence
CN110812146B (en) Multi-region visual function adjusting method and device and virtual reality head-mounted display equipment
CN111580645B (en) Peripheral visual field calibration stimulation-induced electroencephalogram decoding-based sight tracking method
Boyle Improving perception from electronic visual prostheses

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
TA01 Transfer of patent application right

Effective date of registration: 20220114

Address after: Room 914, building 3, Minghu Plaza, Tianqiao District, Jinan City, Shandong Province

Applicant after: Jinan Tongxing Intelligent Technology Co.,Ltd.

Address before: 250014 No. 48, xiongshan Road, Shizhong District, Jinan City, Shandong Province

Applicant before: Bi Hongsheng

GR01 Patent grant