CN114299576A - Oral cavity cleaning area identification method, tooth brushing information input system and related device - Google Patents
- Publication number
- CN114299576A (application number CN202111602391.0A)
- Authority
- CN
- China
- Prior art keywords
- information
- oral cleaning
- oral
- action information
- cleaning area
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Granted
Abstract
The embodiments of the present application disclose an oral cleaning area identification method, a tooth brushing information input system and a related device. The method is applied to an oral cleaning device that comprises a camera module and a posture sensor module, and includes the following steps: acquiring a first facial image of a user through the camera module, acquiring first posture information of the oral cleaning device through the posture sensor module, and generating first action information according to the first facial image and the first posture information; and matching the first action information with the standard action information corresponding to each oral cleaning area respectively, and determining a target oral cleaning area corresponding to the first action information. By adopting this scheme, the accuracy of oral cleaning area identification can be improved.
Description
Technical Field
The application relates to the technical field of intelligent oral cleaning devices, in particular to an oral cleaning area identification method, a tooth brushing information input system and a related device.
Background
With the intelligent development of electric products, electric oral cleaning devices have become increasingly popular for daily oral cleaning thanks to their convenience and rational design. During oral cleaning, different oral cleaning areas require different oral cleaning actions to guarantee the cleaning effect. However, existing electric oral cleaning devices cannot accurately identify the oral cleaning area.
Disclosure of Invention
The embodiment of the application discloses an oral cavity cleaning area identification method, a tooth brushing information input system and a related device, which can accurately identify different oral cavity cleaning areas.
The embodiment of the application discloses a first aspect of an oral cavity cleaning area identification method, which is applied to an oral cavity cleaning device, wherein the oral cavity cleaning device comprises a camera module and an attitude sensor module, and the method comprises the following steps:
acquiring a first face image of a user through the camera module, acquiring first posture information of the oral cleaning device through the posture sensor module, and generating first action information according to the first face image and the first posture information;
and respectively matching the first action information with the standard action information corresponding to each oral cavity cleaning area, and determining a target oral cavity cleaning area corresponding to the first action information.
As an optional implementation manner, in the first aspect of this embodiment, the matching the first action information with the standard action information corresponding to each oral cleaning area, and the determining the target oral cleaning area corresponding to the first action information includes:
matching the first action information with standard action information corresponding to each oral cavity cleaning area respectively to obtain a matching result corresponding to each oral cavity cleaning area;
and screening out a target matching result from the matching results corresponding to the oral cavity cleaning areas, and determining the oral cavity cleaning area corresponding to the target matching result as the target oral cavity cleaning area.
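The matching-and-screening steps above can be sketched as follows. This is a minimal illustration only: the patent does not prescribe a concrete distance metric or data layout, so the Euclidean deviation, the two-component feature tuples, and all names (`REGION_STANDARDS`, `identify_region`, and so on) are assumptions.

```python
import math

# Hypothetical per-region standard action information, each a pair of
# (facial feature vector, posture vector); the values are illustrative only.
REGION_STANDARDS = {
    "maxillary left buccal surface":  ((0.2, 0.7), (3.0, -1.0, 5.0)),
    "maxillary right buccal surface": ((0.8, 0.7), (-3.0, 1.0, 5.0)),
}

def deviation(a, b):
    """Euclidean distance between two equal-length feature tuples."""
    return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))

def identify_region(first_face, first_pose):
    """Match the first action information against every region's standard
    action information to get per-region matching results, then screen
    out the target matching result (here: the smallest total deviation)."""
    results = {
        region: deviation(first_face, face_std) + deviation(first_pose, pose_std)
        for region, (face_std, pose_std) in REGION_STANDARDS.items()
    }
    return min(results, key=results.get)
```

A real device would hold one entry per divided oral cleaning area (16 in fig. 1B) and use richer features than the short vectors shown here.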
As an optional implementation manner, in the first aspect of this embodiment, before the matching the first action information with the standard action information corresponding to each oral cleaning area respectively and determining the target oral cleaning area corresponding to the first action information, the method further includes:
if the oral cleaning device switches oral cleaning areas, determining first moving track information according to the track image acquired by the camera module, and determining second moving track information according to the first track posture information acquired by the sensor module;
the step of matching the first action information with the standard action information corresponding to each oral cavity cleaning area to determine a target oral cavity cleaning area corresponding to the first action information includes:
and respectively matching the first action information with standard action information corresponding to each oral cavity cleaning area, matching the first movement track information and the second movement track information with standard track information corresponding to each oral cavity cleaning area, and determining a target oral cavity cleaning area corresponding to the first action information, the first movement track information and the second movement track information.
As an optional implementation manner, in the first aspect of this embodiment, the matching the first motion information with the standard motion information corresponding to each oral cleaning region, and the determining the target oral cleaning region corresponding to the first motion information includes:
matching the first action information with standard action information corresponding to each oral cleaning area stored in the oral cleaning device, and determining a target oral cleaning area corresponding to the first action information; or,
and transmitting the first action information to an external device, so that the external device matches the first action information with standard action information corresponding to each oral cleaning area stored in the external device, and determines a target oral cleaning area corresponding to the first action information.
As an optional implementation manner, in the first aspect of this embodiment, after the determining the target oral cleaning region corresponding to the first action information, the method further includes:
calculating a deviation value between the first action information and standard action information corresponding to the target oral cleaning area;
if the deviation value is larger than a preset deviation threshold value, determining that the first action information does not accord with an action standard;
and if the deviation value is not greater than the preset deviation threshold value, determining that the first action information meets an action standard.
As an optional implementation manner, in the first aspect of this embodiment, the deviation value includes a first deviation value and a second deviation value; the preset deviation threshold comprises a first preset deviation threshold and a second preset deviation threshold;
the calculating a deviation value between the first action information and standard action information corresponding to the target oral cleaning area comprises:
calculating a deviation value between the first posture information and standard posture information corresponding to the oral cleaning area to obtain a first deviation value;
calculating a deviation value between the first facial image and a standard facial image corresponding to the oral cleaning area to obtain a second deviation value;
the determining that the first action information does not meet an action standard if the deviation value is greater than a preset deviation threshold value includes:
and if the first deviation value is greater than the first preset deviation threshold value and/or the second deviation value is greater than the second preset deviation threshold value, determining that the first action information does not meet the action standard.
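The two-threshold compliance check above can be sketched as follows. The threshold values are assumptions; the patent leaves them unspecified.

```python
FIRST_PRESET_THRESHOLD = 1.5   # for the posture deviation (assumed value)
SECOND_PRESET_THRESHOLD = 0.2  # for the facial-image deviation (assumed value)

def meets_action_standard(first_deviation, second_deviation):
    """Non-compliant if either deviation exceeds its threshold (the
    "and/or" condition in the claim); compliant otherwise."""
    return (first_deviation <= FIRST_PRESET_THRESHOLD
            and second_deviation <= SECOND_PRESET_THRESHOLD)
```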
As an optional implementation manner, in the first aspect of this embodiment, after determining that the first action information does not meet an action criterion if the deviation value is greater than a preset deviation threshold, the method further includes:
generating first prompt information;
outputting the first prompt information, wherein the first prompt information is used for prompting that the first action information does not conform to a standard action;
and playing action guidance content corresponding to the standard action information of the target oral cavity cleaning area, wherein the action guidance content is used for correcting the oral cavity cleaning action acted on the target oral cavity cleaning area.
As an optional implementation manner, in the first aspect of this embodiment, the method further includes:
if it is detected that the oral cleaning operation of each oral cleaning area is finished, acquiring a deviation value between the first action information acquired in each oral cleaning area and the standard action information corresponding to each oral cleaning area;
and determining the score of the oral cleaning process according to the deviation values corresponding to the oral cleaning areas.
As an alternative implementation, in the first aspect of this embodiment, the determining the score of the oral cleaning process according to each deviation value includes:
according to a deviation value corresponding to a first oral cavity cleaning area, determining a target deviation value interval in which the deviation value falls from a plurality of deviation value intervals corresponding to the first oral cavity cleaning area, and determining a score corresponding to the target deviation value interval as the score of the first oral cavity cleaning area; wherein the first oral cleaning region is any one of a plurality of regions, and the different deviation value intervals correspond to different scores respectively;
determining a score for the oral cleaning process as a sum of the scores for each of the oral cleaning regions.
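The interval-based scoring above can be sketched as follows. The intervals and scores are assumed values chosen so that smaller deviations score higher; the patent does not fix them.

```python
# (upper bound exclusive, score) pairs — hypothetical deviation value intervals.
DEVIATION_INTERVALS = [
    (0.5, 10),
    (1.0, 7),
    (2.0, 4),
    (float("inf"), 1),
]

def region_score(deviation_value, intervals=DEVIATION_INTERVALS):
    """Find the target deviation value interval the value falls in and
    return the score corresponding to that interval."""
    for upper, score in intervals:
        if deviation_value < upper:
            return score

def process_score(region_deviations):
    """Score of the oral cleaning process: the sum of the scores of each
    oral cleaning region."""
    return sum(region_score(d) for d in region_deviations.values())
```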
As an optional implementation manner, in the first aspect of this embodiment, after the determining the target oral cleaning region corresponding to the first action information, the method further includes:
recording the duration of the oral cleaning device in the target oral cleaning area to obtain the area oral cleaning time;
and when the oral cleaning time of the region is greater than a preset time threshold, generating second prompt information and outputting the second prompt information, wherein the second prompt information is used for prompting that the oral cleaning device is located in the target oral cleaning region too long.
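The dwell-time check above can be sketched as a simple comparison of the recorded region oral cleaning time against a preset threshold. The threshold value and function names are assumptions.

```python
PRESET_TIME_THRESHOLD = 30.0  # seconds per region; value assumed

def dwell_exceeded(entered_at, now, threshold=PRESET_TIME_THRESHOLD):
    """True when the oral cleaning device has stayed in the target oral
    cleaning region longer than the preset time threshold, which would
    trigger the second prompt information."""
    return (now - entered_at) > threshold
```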
As an optional implementation manner, in the first aspect of this embodiment, after the determining the target oral cleaning region corresponding to the first action information, the method further includes:
if the target oral cleaning area is detected to finish the oral cleaning operation, generating third prompt information and outputting the third prompt information, wherein the third prompt information is used for prompting the oral cleaning device to move from the target oral cleaning area to the next oral cleaning area;
and if the oral cleaning operation of each oral cleaning area is detected to be completed, controlling the oral cleaning device to stop working.
The second aspect of the embodiment of the application discloses an oral cavity cleaning area identification method, which is applied to external equipment, and the method comprises the following steps:
receiving first action information transmitted by an oral cleaning device, wherein the first action information comprises a first facial image and first posture information acquired by the oral cleaning device;
matching the first action information with standard action information corresponding to each oral cavity cleaning area respectively to obtain a matching result;
transmitting the matching result to the oral cleaning device.
The third aspect of the embodiment of the application discloses a tooth brushing information entry system, which comprises an oral cleaning device and external equipment, wherein the oral cleaning device comprises a camera module and an attitude sensor module;
the external equipment is used for outputting guide information corresponding to an area to be input, the guide information is used for instructing a user to place a brush head of the oral cleaning device in a current oral cleaning area, and the current oral cleaning area is one of a plurality of divided oral cleaning areas;
the oral cleaning device is used for acquiring a current facial image of a user through the camera module and acquiring current posture information of the oral cleaning device through the posture sensor module; if the current oral cavity cleaning area is determined to be the area to be recorded according to the current facial image and the current posture information, generating a command to be recorded, and outputting the command to be recorded; if a determination instruction corresponding to the instruction to be input is received, responding to the determination instruction, acquiring a second facial image of the user through the camera module, acquiring second posture information of the oral cleaning device through the sensor module, and generating input action information according to the second facial image and the second posture information; and determining standard action information corresponding to the current oral cleaning area according to the input action information.
As an optional implementation manner, in the third aspect of this embodiment, the oral cleaning device is further configured to end the brushing information entry if each oral cleaning region has determined corresponding standard motion information.
As an optional implementation manner, in the third aspect of this embodiment, the oral cleaning device is further configured to generate the determination instruction if an entry operation triggered by an entry key on the oral cleaning device is detected; or continuously acquiring the posture information of the oral cleaning device through the sensor module, and if the posture information acquired within a preset time period is kept unchanged, generating the determination instruction.
As an optional implementation manner, in the third aspect of this embodiment, the oral cleaning device is further configured to obtain a plurality of pieces of entered action information obtained in the current oral cleaning area; calculate the mean of a plurality of pieces of face information to obtain mean face information, and calculate the mean of the second posture information in the plurality of pieces of entered action information to obtain mean posture information, wherein the plurality of pieces of face information are obtained from the plurality of second facial images; obtain mean entered action information according to the mean face information and the mean posture information, wherein the mean entered action information comprises the mean face information and the mean posture information; and determine the mean entered action information as the standard action information corresponding to the current oral cleaning area.
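The averaging step above can be sketched as follows, with plain feature vectors standing in for the face information and posture information; the representation and names are assumptions.

```python
def mean_vector(vectors):
    """Component-wise mean of a list of equal-length feature tuples."""
    n = len(vectors)
    return tuple(sum(component) / n for component in zip(*vectors))

def standard_from_entries(face_infos, pose_infos):
    """Average the face information and posture information across the
    repeated entries for one region to form its standard action information."""
    return mean_vector(face_infos), mean_vector(pose_infos)
```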
As an optional implementation manner, in the third aspect of this embodiment, the oral cleaning device is further configured to obtain a plurality of pieces of entered action information obtained in the current oral cleaning area; determine a second facial image range according to the maximum and minimum values of a plurality of pieces of second face information, wherein the plurality of pieces of second face information are obtained from the second facial images in the plurality of pieces of entered action information; determine a second posture information range according to the maximum and minimum values of the second posture information in the plurality of pieces of entered action information; obtain an entered action information range according to the second facial image range and the second posture information range, wherein the entered action information range comprises the second facial image range and the second posture information range; and determine the entered action information range as the standard action information corresponding to the current oral cleaning area.
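The range-based alternative above can be sketched as a per-component (min, max) envelope over the repeated entries; later matching can then test whether a sample falls inside the standard range. All names and the vector representation are assumptions.

```python
def range_from_entries(values):
    """Per-component (min, max) range across repeated entries for one region."""
    return tuple((min(component), max(component)) for component in zip(*values))

def within_range(sample, ranges):
    """True when every component of the sample lies inside the standard range."""
    return all(lo <= v <= hi for v, (lo, hi) in zip(sample, ranges))
```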
As an optional implementation manner, in the third aspect of this embodiment, the oral cleaning device is further configured to determine, if the oral cleaning device performs oral cleaning area switching, the switched oral cleaning area as a new current oral cleaning area; acquiring third movement track information of a user through the camera module and acquiring fourth movement track information of the user through the sensor module to obtain input track information, wherein the input track information comprises the third movement track information and the fourth movement track information; and determining standard action information corresponding to the current oral cleaning area according to the input action information, and determining standard track information corresponding to the current oral cleaning area according to the input track information.
As an optional implementation manner, in the third aspect of this embodiment, the external device is further configured to obtain guidance action information corresponding to each oral cleaning region; and sequentially outputting guide action information corresponding to each oral cleaning area according to a preset arrangement sequence of each oral cleaning area, wherein the guide action information is used for indicating standard action information input into the corresponding oral cleaning area according to the guide action information.
As an optional implementation manner, in the third aspect of this embodiment, the oral cleaning apparatus is further configured to send the second face image and the second posture information to the external device;
the external equipment is also used for receiving the second facial image and the second posture information sent by the oral cleaning device and generating entered action information according to the second facial image and the second posture information; and determining standard action information corresponding to the current oral cleaning area according to the entered action information.
The fourth aspect of the embodiments of the present application discloses an oral cleaning area identification device, the device includes:
the data acquisition module is used for acquiring a first face image of a user through the camera module, acquiring first posture information of the oral cleaning device through the posture sensor module, and generating first action information according to the first face image and the first posture information;
and the area determining module is used for matching the first action information with the standard action information corresponding to each oral cavity cleaning area respectively and determining a target oral cavity cleaning area corresponding to the first action information.
In a fifth aspect of the embodiments of the present application, an oral cleaning device is disclosed, which includes a memory and a processor, where the memory stores a computer program, and when the computer program is executed by the processor, the processor is enabled to implement the method for identifying an oral cleaning area disclosed in the first aspect of the embodiments of the present application.
A sixth aspect of the embodiments of the present application discloses an external device, which includes a memory and a processor, where the memory stores a computer program, and when the computer program is executed by the processor, the processor is enabled to implement the oral cleaning area identification method disclosed in the second aspect of the embodiments of the present application.
A seventh aspect of the embodiments of the present application discloses a computer-readable storage medium storing a computer program, wherein the computer program, when executed by a processor, implements an oral cleaning area identification method as disclosed in the first or second aspect of the embodiments of the present application.
Compared with the related art, the embodiment of the application has the following beneficial effects:
the oral cleaning device comprises a camera module and a sensor module, a first facial image of a user is collected through the camera module, first posture information of the oral cleaning device is collected through the sensor module, first action information is obtained, the first action information is matched with standard action information corresponding to each oral cleaning area respectively, a matching result is obtained, and a target oral cleaning area is determined according to the matching result. In the process of brushing teeth, when the brushing teeth are performed in different brushing teeth areas, facial images which can be collected by a camera module fixed on the toothbrush are different; in the process of brushing teeth, the posture information of the toothbrush is different when the toothbrush realizes the operation of brushing teeth in different brushing areas. Therefore, in the embodiment of the application, the target tooth brushing area is determined by combining the facial image and the posture information, so that the situation that the target tooth brushing area is identified wrongly due to the fact that the facial image is not distinguished obviously when the oral cleaning area where the toothbrush is located is determined only according to the facial image is avoided, the situation that the target tooth brushing area is identified difficultly due to the fact that the posture information is large in deviation when the oral cleaning area where the toothbrush is located is determined only according to the posture information is also avoided, and the accuracy of the identification of the oral cleaning area is improved.
Drawings
In order to more clearly illustrate the technical solutions in the embodiments of the present application, the drawings needed in the embodiments are briefly described below. The drawings in the following description are only some embodiments of the present application; those skilled in the art can obtain other drawings from them without creative effort.
FIG. 1A is a schematic diagram of an application environment of an oral cleaning area identification method and a tooth brushing information entry method disclosed in an embodiment of the present application;
FIG. 1B is a schematic illustration of a divided oral cleaning area according to one embodiment of the disclosure;
FIG. 2 is a schematic flow chart illustrating a method for identifying an oral cleaning area according to an embodiment of the present disclosure;
FIG. 3 is a flowchart illustrating a first action information determination according to an embodiment;
fig. 4 is a schematic flow chart of another oral cleaning area identification method disclosed in the embodiments of the present application;
FIG. 5 is a system architecture diagram of a tooth brushing information entry system, according to an embodiment disclosed herein;
fig. 6 is a schematic structural diagram of an oral cleaning area recognition device disclosed in an embodiment of the present application;
fig. 7 is a schematic structural diagram of another oral cleaning region identification device disclosed in the embodiments of the present application;
fig. 8 is a schematic structural diagram of another oral cleaning region identification device disclosed in the embodiments of the present application;
fig. 9 is a schematic structural diagram of an oral cleaning device according to an embodiment of the disclosure.
Detailed Description
The technical solutions in the embodiments of the present application will be clearly and completely described below with reference to the drawings in the embodiments of the present application, and it is obvious that the described embodiments are only a part of the embodiments of the present application, and not all of the embodiments. All other embodiments, which can be derived by a person skilled in the art from the embodiments given herein without making any creative effort, shall fall within the protection scope of the present application.
It is to be noted that the terms "comprises" and "comprising" and any variations thereof in the examples and figures of the present application are intended to cover non-exclusive inclusions. For example, a process, method, system, article, or apparatus that comprises a list of steps or elements is not limited to only those steps or elements listed, but may alternatively include other steps or elements not listed, or inherent to such process, method, article, or apparatus.
The embodiments of the present application disclose an oral cleaning area identification method and device, an oral cleaning device, and a storage medium, which can accurately identify different oral cleaning areas. The following detailed description is made with reference to the accompanying drawings.
Referring to fig. 1A, fig. 1A is a schematic view of an application environment of an oral cleaning area identification method according to an embodiment of the present application. As shown in fig. 1A, the environment may include an oral cleaning device 10 and an external device 20, which can be in data communication. The oral cleaning device 10 includes a camera module 100 and a posture sensor module 110. The camera module 100 may include at least one camera, and may be fixed on the oral cleaning device 10 or movably connected to it; it needs to be able to photograph the various parts of the user's face. The posture sensor module 110 includes at least a multi-axis sensor and an angular velocity sensor, and is configured to acquire posture data of the oral cleaning device 10 such as velocity, acceleration and angular velocity. The multi-axis sensor may be a six-axis sensor, or a combination of several three-axis sensors, such as a nine-axis sensor.
In an embodiment of the present application, the oral cleaning device 10 may further include a power module, a data module, a timer, a voice module, a switch module, a memory, a data transmission module, a filter module, and a driving module. The voice module may be a buzzer, the data transmission module may contain a Bluetooth module and/or a WiFi module, and the driving module may be a motor.
The oral cleaning device 10 can establish a communication connection with the external device 20, and the oral cleaning device 10 collects a first facial image of the user through the camera module 100 and first posture information of the oral cleaning device through the posture sensor module 110. The oral cleaning device can match the first facial image and the first posture information with the standard action information of each oral cleaning area, and determine a target oral cleaning area according to the obtained matching result; the oral cleaning apparatus may also transmit the first facial image and the first posture information to the external device 20, the external device 20 matches the first facial image and the first posture information with the standard motion information of each oral cleaning region, and transmits the obtained matching result to the oral cleaning apparatus 10, and the oral cleaning apparatus 10 determines the target oral cleaning region according to the matching result.
Referring to fig. 1B, fig. 1B is a schematic diagram of divided oral cleaning areas according to an embodiment of the disclosure. In fig. 1B, the oral cavity is divided into 16 oral cleaning regions according to the 16 region positions commonly used in dentistry. Specifically, these are the "maxillary anterior buccal surface", "maxillary anterior lingual surface", "maxillary left buccal surface", "maxillary left lingual surface", "maxillary left occlusal surface", "maxillary right buccal surface", "maxillary right lingual surface", "maxillary right occlusal surface", "mandibular anterior buccal surface", "mandibular anterior lingual surface", "mandibular left buccal surface", "mandibular left lingual surface", "mandibular left occlusal surface", "mandibular right buccal surface", "mandibular right lingual surface" and "mandibular right occlusal surface". It should be noted that the oral cleaning regions in the oral cavity may be divided in other ways; fig. 1B shows only one embodiment of the division and is not intended to limit the oral cleaning regions.
Referring to fig. 2, fig. 2 is a schematic flowchart illustrating a method for identifying an oral cleaning region according to an embodiment of the present application, which can be applied to the oral cleaning device 10. As shown in fig. 2, the method may include the steps of:
210. Collect a first facial image of the user through the camera module, collect first posture information of the oral cleaning device through the posture sensor module, and generate first action information according to the first facial image and the first posture information.
In this application embodiment, when the user adopts oral cavity cleaning device to carry out oral cavity cleanness, oral cavity cleaning device passes through camera module and gathers user's facial image in the oral cavity cleaning process in real time, and the facial image who gathers constitutes first facial image. In addition, the oral cleaning device also collects the posture information of the oral cleaning device in the oral cleaning process in real time through the posture sensor module, and the collected posture information forms first posture information, wherein the posture information can comprise the moving speed and the moving direction of the oral cleaning device, the acceleration and the direction of the oral cleaning device, the angular speed and the direction of the oral cleaning device and the like. The oral cleaning device generates first motion information based on the acquired first facial image and the first posture information. The first motion information may be a set including a first face image and first posture information, for example, the first face image is a1, the first posture information is B1, and the first posture information includes a velocity, an acceleration, and an angular velocity, where the velocity and the acceleration indicate a direction from the maxilla to the mandible in fig. 2 when the velocity and the acceleration are negative values, and indicate a direction from the mandible to the maxilla in fig. 2 when the velocity and the acceleration are positive values. Therefore, B1 is (3, -1,5), and the first motion information D1 is (a1, B1).
The first action information may also be a data set formed by extracting feature value data from the first facial image and partial data from the first posture information. For example, the nose position data in the first facial image and the angular velocity data in the first posture information are extracted to generate the first action information.
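The two forms of "first action information" described above can be sketched as follows. This is a minimal illustration, not the patent's actual data format: the field names, the nose-position feature, and the pose tuple layout are all assumptions chosen to mirror the examples in the text.

```python
from dataclasses import dataclass

@dataclass
class ActionInfo:
    """Hypothetical 'first action information' record: a facial feature
    extracted from the face image plus selected pose-sensor readings."""
    nose_xy: tuple          # (x, y) nose position extracted from the face image
    velocity: float         # signed: negative = maxilla -> mandible (per fig. 2)
    acceleration: float
    angular_velocity: float

def make_action_info(face_features: dict, pose: tuple) -> ActionInfo:
    """Combine selected facial-feature data and partial pose data into one record."""
    v, a, w = pose          # velocity, acceleration, angular velocity
    return ActionInfo(nose_xy=face_features["nose"], velocity=v,
                      acceleration=a, angular_velocity=w)

# mirrors the text's example B1 = (3, -1, 5)
info = make_action_info({"nose": (120, 84)}, (3, -1, 5))
```

The full-set form (A1, B1) would simply keep the whole image alongside the pose tuple; the feature-extraction form above trades completeness for a much smaller record to match against.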
220. And respectively matching the first action information with the standard action information corresponding to each oral cavity cleaning area, and determining a target oral cavity cleaning area corresponding to the first action information.
In the embodiment of the present application, after obtaining the first action information, the oral cleaning device matches the first action information with the standard action information corresponding to each oral cleaning region respectively, and determines the target oral cleaning region corresponding to the first action information according to how well the first action information matches each piece of standard action information. The standard action information refers to the information recorded when a user performs a standard oral cleaning action in an oral cleaning region, and can be obtained by collecting and entering the information in advance.
After the oral cleaning device determines the target oral cleaning area corresponding to the first action information, the camera module can be used for continuously acquiring the first facial image of the user, the attitude sensor module is used for continuously acquiring the first attitude information of the oral cleaning device so as to continuously obtain the first action information, and the oral cleaning area corresponding to each piece of first action information in the oral cleaning process is determined according to each piece of obtained first action information.
In one embodiment, the process of the oral cleaning device matching the first action information with the standard action information corresponding to each oral cleaning region respectively in step 220 to determine the target oral cleaning region corresponding to the first action information may include the following steps:
and matching the first action information with the standard action information corresponding to each oral cavity cleaning area respectively to obtain a matching result corresponding to each oral cavity cleaning area.
And screening out a target matching result from the matching results corresponding to the oral cavity cleaning areas, and determining the oral cavity cleaning area corresponding to the target matching result as the target oral cavity cleaning area.
In the embodiment of the application, the oral cleaning device respectively matches the first action information with the standard action information corresponding to each oral cleaning area, and obtains a plurality of matching results equal to the number of the oral cleaning areas. The oral cleaning device screens out a target matching result from the obtained multiple matching results, and an oral cleaning area corresponding to the target matching result is used as a target oral cleaning area corresponding to the acquired first action information. The matching result may include matching degrees of the first action information and the plurality of standard action information, respectively, and the target matching result may be a matching result with a highest matching degree. At this time, the oral cleaning device selects an oral cleaning area corresponding to the standard action information corresponding to the highest matching degree in the matching result, and the oral cleaning area is the target oral cleaning area.
In the embodiment of the present application, by screening the target matching result out of the plurality of matching results, the determined target oral cleaning area is the best-matching one among the oral cleaning areas, which improves the reliability of the determined target oral cleaning area.
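The "match against every region, keep the highest matching degree" strategy above can be sketched as follows. The similarity function and the scalar standard values are placeholders (assumptions): the patent does not specify how the matching degree is computed, only that the region with the highest degree is screened out.

```python
def match_best_region(action_value, standards, similarity):
    """Match the action info against each region's standard info and screen
    out the target matching result: the region with the highest matching degree."""
    results = {region: similarity(action_value, std)
               for region, std in standards.items()}
    target = max(results, key=results.get)
    return target, results

def sim(a, b):
    # toy similarity on scalar features: closer values -> higher matching degree
    return 1.0 / (1.0 + abs(a - b))

target, results = match_best_region(4.2, {"upper-left": 4.0, "lower-right": 9.0}, sim)
```

With real data the `similarity` callable would compare image features and pose vectors rather than scalars, but the screening step is the same argmax over matching degrees.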
In some embodiments, the process of the oral cleaning device performing the step 220 of matching the first action information with the standard action information corresponding to each oral cleaning region respectively to determine the target oral cleaning region corresponding to the first action information may include the following steps:
and matching the first action information with the standard action information corresponding to the current oral cleaning area according to a preset area sequence to obtain a matching result of the current oral cleaning area, wherein the current oral cleaning area is the first oral cleaning area in the area sequence.
And if the matching result of the current oral cleaning area does not meet the matching requirement, re-determining the oral cleaning area behind the current oral cleaning area as the current oral cleaning area according to the area sequence, and re-performing the step of matching the first action information with the standard action information corresponding to the current oral cleaning area to obtain the matching result of the current oral cleaning area.
And if the matching result of the current oral cleaning area meets the matching requirement, determining the current oral cleaning area as the target oral cleaning area, and ending the matching process.
In this embodiment of the present application, the oral cleaning device may match the first action information with the standard action information corresponding to each oral cleaning region one by one, according to a preset arrangement order of the oral cleaning regions, to obtain a corresponding matching result, and the matching result may be expressed as a matching degree. Each time the standard action information corresponding to an oral cleaning area is matched, the matching result is compared with the matching requirement to judge whether the matching degree reaches the matching-degree threshold. If the matching result meets the requirement, the oral cleaning device can directly determine the currently matched oral cleaning area as the target oral cleaning area. If the matching result does not meet the matching requirement, the oral cleaning device can determine the oral cleaning area one position behind the current oral cleaning area in the sequence as the new current oral cleaning area. The process of matching and judging whether the matching requirement is met is repeated until the target oral cleaning area is determined.
In the embodiment of the present application, the target oral cleaning area is determined by matching and judging one area at a time, so the matching process can stop as soon as an oral cleaning area meeting the matching requirement is found, which reduces the amount of calculation in the matching process.
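The sequential, early-stopping variant above can be sketched like this. The similarity function, the region order, and the threshold value are illustrative assumptions; only the control flow (walk the preset order, return on the first region meeting the matching requirement) comes from the text.

```python
def match_in_order(action_value, ordered_standards, similarity, threshold):
    """Match region by region in the preset order; stop as soon as one
    region's matching degree meets the matching requirement (threshold)."""
    for region, std in ordered_standards:
        if similarity(action_value, std) >= threshold:
            return region          # target oral cleaning area found, stop early
    return None                    # no region met the matching requirement

def sim(a, b):
    return 1.0 / (1.0 + abs(a - b))

# hypothetical preset region order with scalar standard values
order = [("region-1", 9.0), ("region-2", 4.0), ("region-3", 4.2)]
found = match_in_order(4.2, order, sim, threshold=0.8)
```

Compared with the exhaustive argmax variant, this trades a guaranteed best match for fewer comparisons on average, which is the calculation saving the text describes.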
In one embodiment, the oral cleaning device may further perform the following steps before performing the process of determining the matching result according to the first motion information of step 220:
if the oral cleaning device switches the oral cleaning area, determining first moving track information according to the track image acquired by the camera module, and determining second moving track information according to the first track posture information acquired by the posture sensor module;
accordingly, the process of the oral cleaning device determining the matching result according to the first action information in step 220, where the matching result is obtained by matching the first action information with the standard action information corresponding to each oral cleaning area, includes:
and determining a matching result according to the first action information, the first movement track information and the second movement track information, wherein the matching result is obtained by respectively matching the first action information with the standard action information corresponding to each oral cavity cleaning area and matching the first movement track information and the second movement track information with the standard track information corresponding to each oral cavity cleaning area.
In this embodiment, when the oral cleaning device detects that it has switched between oral cleaning areas, it may determine the first movement track information according to the track images collected in real time by the camera module during the switch, and determine the second movement track information according to the first track posture information collected in real time by the posture sensor module during the switch, where the track images include a plurality of facial images and the first track posture information includes a plurality of pieces of posture information.
For example, the oral cleaning device collects the first facial image and the first posture information in real time. When the first action information composed of the first facial image and the first posture information collected in real time no longer matches the standard action information of the target oral cleaning region successfully (or no longer has the highest matching degree there), but matches the standard action information of another oral cleaning region successfully (or has the highest matching degree there), the first facial images and first posture information from the last several acquisitions while still in the target oral cleaning region, together with those from the first several acquisitions after switching to the other oral cleaning region, can be used to determine the first movement track information and the second movement track information.
In the embodiment of the present application, each oral cleaning region may correspond to a plurality of pieces of standard trajectory information, namely the standard trajectories followed when switching to that oral cleaning region from each of the other oral cleaning regions. Therefore, after obtaining the first movement track information and the second movement track information, the oral cleaning device can match the first action information with the standard action information corresponding to each oral cleaning area, and match the first and second movement track information with the standard track information corresponding to each oral cleaning area, so as to obtain the matching result. The standard trajectory information may include first standard movement trajectory information and second standard movement trajectory information. Specifically, the oral cleaning device may determine one set of matching degrees by matching the first action information with the standard action information, and determine another set of matching degrees by matching the first and second movement trajectory information with the first and second standard movement trajectory information of each oral cleaning region, respectively. The oral cleaning device may then select the oral cleaning area corresponding to the maximum value across the two sets of matching degrees as the target oral cleaning area. Alternatively, the sum of the two matching degrees corresponding to each oral cleaning region may be computed, and the oral cleaning region with the maximum sum taken as the target oral cleaning region. This is not particularly limited herein.
The target oral cleaning area is determined by adding the moving track for matching, so that the accuracy of identifying the oral cleaning area can be further improved.
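The second combination strategy described above (maximize the sum of the action matching degree and the trajectory matching degree per region) can be sketched as follows. The region names and degree values are placeholders; the patent leaves the exact matching-degree computation open.

```python
def match_with_trajectory(action_degrees, trajectory_degrees):
    """For each region, sum its action matching degree and its trajectory
    matching degree, then pick the region with the maximum sum as the
    target oral cleaning area (one of the two options in the text)."""
    totals = {region: action_degrees[region] + trajectory_degrees.get(region, 0.0)
              for region in action_degrees}
    return max(totals, key=totals.get)

# region "A" wins on action alone, but "B" wins once trajectory is added in
target = match_with_trajectory({"A": 0.6, "B": 0.5}, {"A": 0.1, "B": 0.4})
```

This illustrates why adding the trajectory helps: a region whose action match is slightly weaker can still be the correct target when the movement track into it matches well.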
In one embodiment, matching the first action information with the standard action information corresponding to each oral cleaning area respectively, and determining the target oral cleaning area corresponding to the first action information comprises:
and matching the first action information with standard action information corresponding to each oral cavity cleaning area stored in the oral cavity cleaning device, and determining a target oral cavity cleaning area corresponding to the first action information. Or,
and transmitting the first action information to the external equipment so that the external equipment matches the first action information with standard action information corresponding to each oral cavity cleaning area stored in the external equipment, and determining a target oral cavity cleaning area corresponding to the first action information.
In the embodiment of the present application, the process of matching the first action information with the standard action information corresponding to each oral cleaning area to obtain the matching result may be performed in the oral cleaning device, or may be performed on an external device, such as a mobile phone, a tablet computer, or another terminal.
Specifically, if the matching process is performed in the oral cleaning device, then after obtaining the first action information the oral cleaning device may directly match the first action information with the standard action information corresponding to each oral cleaning area to obtain the matching result, where the standard action information corresponding to each oral cleaning area may be stored in the oral cleaning device. Alternatively, the standard action information may be stored in a server or other external device; when the user activates the oral cleaning device, the server or other external device transmits the standard action information corresponding to each oral cleaning region to the oral cleaning device.
If the matching process is performed in the external device, then after obtaining the first action information the oral cleaning device transmits it to the external device through a communication mode such as Bluetooth or WiFi. After receiving the first action information, the external device matches it with the standard action information corresponding to each oral cleaning area stored in the external device, and transmits the resulting matching result back to the oral cleaning device.
In the embodiment of the application, the efficiency of the matching process can be improved by executing the process of matching the first action information with the plurality of standard action information in the oral cleaning device, and the requirement on the calculation capability of the oral cleaning device can be reduced by executing the process of matching the first action information with the plurality of standard action information in the external equipment.
By adopting this embodiment, the target oral cleaning area is determined by combining the facial image with the posture information. This avoids misidentifying the target oral cleaning area when it is determined from the facial image alone and the facial images of different areas are not clearly distinguishable, and also avoids the difficulty of identifying the target oral cleaning area when it is determined from the posture information alone and the posture information has a large deviation, thereby improving the accuracy of identifying the oral cleaning area.
In an embodiment, please refer to fig. 3, which is a schematic flowchart of determining whether the first action information meets the action standard according to an embodiment. After the oral cleaning device performs step 230 of determining the target oral cleaning area corresponding to the first action information, the following steps may be further performed:
310. calculating a deviation value between the first action information and standard action information corresponding to the target oral cleaning area;
320. if the deviation value is larger than a preset deviation threshold value, determining that the first action information does not accord with the action standard;
330. and if the deviation value is not greater than the preset deviation threshold value, determining that the first action information meets the action standard.
In the embodiment of the present application, after determining the target oral cleaning area, the oral cleaning device calculates a deviation value between the collected first action information and the standard action information corresponding to the target oral cleaning area. The standard action information includes a standard facial image and standard posture information, and the first action information includes the first facial image and the first posture information. Therefore, to calculate the deviation value between the first action information and the standard action information, the similarity between the first facial image and the standard facial image and the difference between the first posture information and the standard posture information may be calculated respectively, and the calculated similarity and difference may then be combined to obtain the deviation value. For example, when the similarity falls in the interval [80%, 90%] and the difference falls in the interval [10, 20], the corresponding deviation value is 10, and so on, but the scheme is not limited thereto.
At this time, the calculated deviation value may be compared with a preset deviation threshold. If the deviation value is greater than the preset deviation threshold, it is determined that the first action information does not meet the action standard; if the deviation value is not greater than the preset deviation threshold, it is determined that the first action information meets the action standard. The action standard is the requirement on the oral cleaning action that the user performs when the action information is entered.
By calculating the deviation value between the first action information and the standard action information corresponding to the oral cleaning area, the deviation between the action corresponding to the first action information and the standard action can be reflected intuitively, which improves the accuracy of judging the oral cleaning action.
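The similarity-plus-difference combination above can be sketched as an interval lookup, mirroring the text's example (similarity in [80%, 90%] with difference in [10, 20] maps to deviation 10). The lookup table entries beyond that one example, and the threshold value, are assumptions for illustration only.

```python
def deviation_value(similarity, difference, lookup):
    """Combine the image similarity and the pose difference into one deviation
    value via a preset interval-pair lookup table."""
    for (s_lo, s_hi), (d_lo, d_hi), dev in lookup:
        if s_lo <= similarity <= s_hi and d_lo <= difference <= d_hi:
            return dev
    raise ValueError("no interval pair matched")

LOOKUP = [((0.80, 0.90), (10, 20), 10),   # the example given in the text
          ((0.90, 1.00), (0, 10), 5)]     # additional row: an assumption

dev = deviation_value(0.85, 15, LOOKUP)
meets_standard = dev <= 12                 # preset deviation threshold (assumed)
```

Any monotone combination rule would fit the text equally well; the lookup form just makes the "combine similarity and difference" step concrete and testable.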
In one embodiment, the offset value comprises a first offset value and a second offset value; the preset deviation threshold comprises a first preset deviation threshold and a second preset deviation threshold;
the process of the oral cleaning device in performing step 310 of calculating the deviation value between the first motion information and the standard motion information corresponding to the target oral cleaning region may include:
calculating a deviation value between the first posture information and standard posture information corresponding to the oral cleaning area to obtain a first deviation value;
and calculating a deviation value between the first facial image and the standard facial image corresponding to the oral cleaning area to obtain a second deviation value.
In addition, the step in step 320 of determining that the first action information does not meet the action standard if the deviation value is greater than the preset deviation threshold may include:
and if the first deviation value is larger than a first preset deviation threshold value and/or the second deviation value is larger than a second preset deviation threshold value, determining that the first action information does not meet the action standard.
In the embodiment of the present application, the deviation value between the first action information and the standard action information corresponding to the target oral cleaning region may include two values, namely a first deviation value and a second deviation value. Correspondingly, the preset deviation threshold may also include two preset thresholds, namely a first preset deviation threshold and a second preset deviation threshold. At this time, in the process of calculating the deviation value between the first action information and the standard action information corresponding to the target oral cleaning region, the deviation value between the first posture information and the standard posture information and the deviation value between the first facial image and the standard facial image corresponding to the oral cleaning region may be calculated, so as to obtain the first deviation value and the second deviation value respectively. Since the first posture information is collected by the posture sensor module, it can be expressed as specific data; that is, the deviation value between the data of the first posture information and the data of the standard posture information is calculated to obtain the first deviation value. The first deviation value may be the difference between the first posture information and the standard posture information, or the ratio of that difference to the standard posture information, and is not particularly limited.
For the calculation of the second deviation value, a face image coordinate system may be established first, then feature extraction may be performed on the first face image and the standard face image, the same extracted features are obtained, then coordinate values of the extracted features in the established face image coordinate system are obtained, and further a difference value between the coordinate values of each feature in the two images is calculated, so as to obtain the second deviation value. For example, the coordinates of the feature extracted in the first face image in the face image coordinate system are (10,20), and the coordinates of the feature extracted in the standard face image in the face image coordinate system are (20,20), then the difference between the two coordinate values is 10. In this case, the difference may be directly used as the second deviation value, or a ratio between the difference and the coordinate value of the feature extracted from the standard face image may be used as the second deviation value, which is not particularly limited. In addition, the order of calculating the first deviation value and the second deviation value is not particularly limited.
In an embodiment of the present application, after calculating the first deviation value and the second deviation value, the oral cleaning device may compare the first deviation value with a first preset deviation threshold value and compare the second deviation value with a second preset deviation threshold value. Determining that the first action information does not meet the action criterion as long as either one of the first deviation value or the second deviation value is greater than the corresponding preset deviation threshold. Otherwise, if the first deviation value is not greater than the first preset deviation threshold value and the second deviation value is not greater than the second preset deviation threshold value, it is determined that the first motion information meets the motion criterion.
The deviation value is composed of a first deviation value, obtained from the first posture information, and a second deviation value, obtained from the first facial image. Judging whether the first action information meets the action standard by comparing the first and second deviation values with their corresponding preset deviation thresholds can further improve the accuracy of judging the oral cleaning action.
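The two-deviation scheme above can be sketched as follows. The concrete metrics are assumptions: the text leaves the pose deviation open (difference or ratio), so a summed absolute difference is used here, and the facial deviation follows the text's coordinate example ((10, 20) vs (20, 20) gives 10).

```python
import math

def first_deviation(pose, std_pose):
    """Deviation between pose data and standard pose data; here the sum of
    elementwise absolute differences (one of the unspecified options)."""
    return sum(abs(p - s) for p, s in zip(pose, std_pose))

def second_deviation(feature_xy, std_xy):
    """Deviation between the same extracted facial feature in the two images,
    as the distance between the coordinate values."""
    return math.hypot(feature_xy[0] - std_xy[0], feature_xy[1] - std_xy[1])

def meets_action_standard(d1, d2, t1, t2):
    """Fails if either deviation exceeds its corresponding preset threshold
    (the and/or rule from the text)."""
    return d1 <= t1 and d2 <= t2

d1 = first_deviation((3, -1, 5), (3.5, -1, 4))
d2 = second_deviation((10, 20), (20, 20))
```

Note the asymmetry the text relies on: exceeding either threshold alone is enough to flag the action as non-standard, so both checks must pass.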
In one embodiment, after the oral cleaning apparatus performs the process of determining that the first action information does not meet the action criterion if the deviation value is greater than the preset deviation threshold in step 320, the following steps may be further performed:
generating first prompt information;
outputting first prompt information, wherein the first prompt information is used for prompting that the first action information does not meet the action standard;
and playing action guide content corresponding to the standard action information of the target oral cavity cleaning area, wherein the action guide content is used for correcting the oral cavity cleaning action acted on the current target oral cavity cleaning area.
In the embodiment of the present application, after determining that the first action information does not meet the action standard, the oral cleaning device generates and outputs the first prompt information to prompt the user that the first action information does not meet the action standard, that is, that the current oral cleaning action is not standard. The oral cleaning device may output the first prompt information, and thereby remind the user, through a buzzer arranged in the oral cleaning device.
After the user has been reminded, the oral cleaning device can also play the action guidance content corresponding to the standard action information of the target oral cleaning area, where the action guidance content is the standard oral cleaning action corresponding to the standard action information of the target oral cleaning area. For example, the action guidance may be "start with the first tooth on the left, clean in an up-and-down manner for 10 seconds per area, and after brushing repeat the same operation for the teeth on the right". After the action guidance content is output, the user can correct the current oral cleaning action according to the correct oral cleaning action.
Outputting the first prompt information alerts the user that the current oral cleaning action is wrong, and outputting the action guidance content helps the user correct the current oral cleaning action, so that the user's oral cleaning can be better assisted.
In one embodiment, the oral cleaning region identification method may further perform the steps of:
if it is detected that the oral cleaning operation has been completed in each oral cleaning area, obtaining a deviation value between the first action information collected in each oral cleaning area and the standard action information corresponding to that oral cleaning area;
and determining the score of the oral cleaning process according to the corresponding deviation value of each oral cleaning area.
In the embodiment of the present application, the oral cleaning device continuously collects its first posture information and the user's first facial image while the user performs oral cleaning, until the user finishes the oral cleaning process. During this continuous collection, the oral cleaning device obtains in real time the matching result between the first action information, composed of the first posture information and the first facial image, and the standard action information corresponding to the 16 oral cleaning areas divided in fig. 1B, so as to determine in real time the target oral cleaning area where the oral cleaning device is located, and records the first action information collected while the device is in each oral cleaning area. When the user finishes the oral cleaning process, the device checks whether each oral cleaning area has corresponding first action information; if so, the user is considered to have completed the oral cleaning action in each oral cleaning area. At this time, the oral cleaning device calculates the deviation value between the first action information collected in each oral cleaning area and the standard action information corresponding to that area, obtaining the deviation value corresponding to each oral cleaning area, and determines the user's score for the oral cleaning process according to these deviation values. Scoring the oral cleaning process after it finishes allows the user's tooth brushing to be evaluated intuitively and better assists the user in improving oral cleaning quality.
In one embodiment, the process of the oral cleaning device determining the score of the oral cleaning process according to each deviation value may include:
according to the deviation value corresponding to the first oral cavity cleaning area, determining a target deviation value interval in which the deviation value falls from a plurality of deviation value intervals corresponding to the first oral cavity cleaning area, and determining a score corresponding to the target deviation value interval as the score of the first oral cavity cleaning area; wherein the first oral cleaning region is any one of the plurality of regions, and the different deviation value intervals correspond to different scores respectively.
The sum of the scores of the first motion information collected at each oral cleaning region is determined as a score of the oral cleaning process.
In this embodiment, after calculating the deviation value corresponding to each oral cleaning region, the oral cleaning device may determine the score of the first action information corresponding to each oral cleaning region according to a plurality of deviation value intervals corresponding to each oral cleaning region. The plurality of deviation value intervals corresponding to the oral cleaning regions may be the same or different. For example, the oral cleaning area a has three deviation value ranges of [0,10], [10,20] and [20,30], and the corresponding scores are 10, 5 and 1, respectively. The oral cleaning area B also has three deviation value intervals, which are [0,20], [20,40] and [40,60], and the corresponding scores are 10, 5 and 1 respectively. At this time, the deviation value of the oral cleaning area a is 9, and it is determined that the deviation value of the oral cleaning area a falls within [0,10] of the three deviation value intervals corresponding to the oral cleaning area a, so that the score of the first action information acquired by the oral cleaning area a is 10. The deviation value of the oral cavity cleaning area B is 18, the deviation value of the oral cavity cleaning area B is judged to fall into the [0,20] interval of the three deviation value intervals corresponding to the oral cavity cleaning area B, then the score of the first action information collected by the oral cavity cleaning area B is also 10, and the like. The deviation value interval data can be preset in the oral cavity cleaning device or the server, and if the deviation value interval data is preset in the server, the data can be transmitted to the oral cavity cleaning device according to a request sent by the oral cavity cleaning device.
The oral cleaning device calculates the sum of the scores of the first action information collected in all oral cleaning regions and takes this sum as the score of the oral cleaning process. The oral cleaning device may also normalize this sum to the interval [0, 100] and take the normalized value as the score of the oral cleaning process. Scoring the oral cleaning process allows the user's oral cleaning to be evaluated more intuitively, and matching the deviation value of each oral cleaning region against that region's own deviation value intervals fully accounts for the different tolerances that different oral cleaning regions place on the degree of deviation, which improves the reliability of the scoring.
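The per-region interval scoring above can be sketched directly from the worked example (regions A and B with intervals [0,10]/[10,20]/[20,30] and [0,20]/[20,40]/[40,60], scores 10/5/1). Only the fallback score for an out-of-range deviation and the normalization denominator are assumptions.

```python
def region_score(deviation, intervals):
    """Map a region's deviation value to a score using that region's own
    deviation-value intervals; the first interval containing the value wins."""
    for lo, hi, score in intervals:
        if lo <= deviation <= hi:
            return score
    return 0    # deviation outside all intervals (assumed fallback)

REGION_A = [(0, 10, 10), (10, 20, 5), (20, 30, 1)]   # intervals from the text
REGION_B = [(0, 20, 10), (20, 40, 5), (40, 60, 1)]

total = region_score(9, REGION_A) + region_score(18, REGION_B)
# normalize a 2-region total (max 10 each) to [0, 100], as the text allows
normalized = total / (10 * 2) * 100
```

Note how the same deviation magnitude scores differently per region: 18 would score 5 in region A but scores 10 in region B, which is exactly the per-region tolerance the text emphasizes.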
In one embodiment, after the oral cleaning apparatus performs the process of determining the target oral cleaning region corresponding to the first action information in step 230, the following steps may be further performed:
recording the duration for which the oral cleaning device stays in the target oral cleaning area to obtain the regional oral cleaning time;
and when the regional oral cleaning time is greater than a preset time threshold, generating and outputting second prompt information, wherein the second prompt information is used for prompting that the oral cleaning device has stayed in the target oral cleaning area for too long.
In the embodiment of the application, after determining the target oral cleaning area, the oral cleaning device records the duration for which it stays in that area to obtain the regional oral cleaning time. Specifically, the oral cleaning device can continuously acquire its posture information and the facial image through the posture sensor module and the camera module, determine in real time the oral cleaning area in which it is located, and, while the device remains in the target oral cleaning area, keep recording the elapsed time, stopping only when the device moves to an oral cleaning area other than the target one; the continuously recorded time is the regional oral cleaning time. During this recording, if the recorded time is detected to exceed the preset time threshold, the second prompt information is generated and output. The second prompt information is a message prompting the user that oral cleaning in the target oral cleaning area has lasted too long. The second prompt information may be output through a buzzer arranged in the oral cleaning device, or in a voice playing mode. By detecting the dwell time in the target oral cleaning area and generating the second prompt information to indicate that cleaning of the current area has lasted too long, the user's oral cleaning data can be recorded more accurately and the user's oral cleaning experience improved.
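The dwell-time logic above (accumulate while the area is unchanged, reset on an area switch, prompt once when a threshold is exceeded) can be sketched as a small state machine. Class and method names are illustrative, not part of the embodiment.

```python
class DwellTimer:
    """Tracks how long the device has stayed in one oral cleaning area."""

    def __init__(self, threshold_s):
        self.threshold_s = threshold_s
        self.region = None
        self.elapsed = 0.0
        self.prompted = False

    def update(self, region, dt):
        """Feed the area detected at each sampling tick of dt seconds.

        Returns True exactly once per stay, when the regional oral
        cleaning time first exceeds the preset threshold.
        """
        if region != self.region:
            # Device moved to another area: stop and restart the recording.
            self.region, self.elapsed, self.prompted = region, 0.0, False
            return False
        self.elapsed += dt
        if self.elapsed > self.threshold_s and not self.prompted:
            self.prompted = True  # emit the second prompt information once
            return True
        return False


timer = DwellTimer(threshold_s=30.0)
```

In use, the caller would trigger the buzzer or voice prompt whenever `update` returns `True`.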
In one embodiment, after the oral cleaning apparatus performs the process of determining the target oral cleaning region corresponding to the first action information in step 230, the following steps may be further performed:
if the target oral cavity cleaning area is detected to finish the oral cavity cleaning operation, generating third prompt information and outputting the third prompt information, wherein the third prompt information is used for prompting that the oral cavity cleaning device is moved from the target oral cavity cleaning area to the next oral cavity cleaning area;
and if the oral cleaning operation is detected to be completed in each oral cleaning area, controlling the oral cleaning device to stop working.
In the embodiment of the present application, the standard action information corresponding to each oral cleaning area may include a plurality of pieces of standard posture information and standard facial images. When the first action information is matched, it is matched against each piece of standard action information in each oral cleaning area; as soon as the first action information is successfully matched with one piece of standard action information, the target oral cleaning area is determined to be the oral cleaning area corresponding to that matched standard action information. When the oral cleaning device matches the standard action information of the target oral cleaning area against the first action information acquired in real time, and all of the plurality of pieces of standard action information of the target oral cleaning area have been successfully matched with acquired first action information, the target oral cleaning area can be considered to have completed its oral cleaning operation. At this point, the oral cleaning device may generate third prompt information, which is a message prompting the user to move the oral cleaning device from the target oral cleaning area to another oral cleaning area. The oral cleaning device can output the third prompt information in a voice playing mode.
When it is detected that the oral cleaning operation has been completed in every oral cleaning area, that is, every piece of standard action information in every oral cleaning area has been successfully matched with first action information acquired during the oral cleaning process, the oral cleaning operation can be considered complete in each oral cleaning area. At this time the oral cleaning device may stop working; specifically, it may control the motor provided in the device to stop rotating, which is not particularly limited here. Prompting the user to switch oral cleaning areas when one area is finished, and stopping the device when every area is finished, can further improve the user's oral cleaning experience and avoid tooth damage caused by cleaning one area, or the whole mouth, for too long.
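The completion criterion described above — an area is done once each of its standard action entries has been matched at least once, and the device stops when every area is done — can be sketched as follows. The matcher itself is abstracted away; all identifiers here are illustrative assumptions.

```python
def make_tracker(standards):
    """standards: {area: set of standard-action ids}; returns pending sets."""
    return {r: set(ids) for r, ids in standards.items()}


def record_match(pending, region, action_id):
    """Mark one standard action as matched.

    Returns (region_done, all_done): region_done triggers the third
    prompt information; all_done triggers stopping the motor.
    """
    pending[region].discard(action_id)
    region_done = not pending[region]
    all_done = all(not ids for ids in pending.values())
    return region_done, all_done


# Two areas: area "A" has two standard actions, area "B" has one.
pending = make_tracker({"A": {1, 2}, "B": {3}})
```

A caller would invoke `record_match` on every successful match and react to the two flags.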
Referring to fig. 4, fig. 4 is a flowchart illustrating another oral cleaning area identification method disclosed in the embodiment of the present application, which can be applied to the external device 20. As shown in fig. 4, the method may include the steps of:
410. receiving first action information transmitted by the oral cleaning device, wherein the first action information comprises a first facial image and first posture information acquired by the oral cleaning device;
420. matching the first action information with the standard action information corresponding to each oral cavity cleaning area respectively to obtain a matching result;
430. transmitting the matching result to the oral cleaning device.
In the embodiment of the present application, the specific implementation steps of the oral cleaning region identification method applied to the external device are the same as those in the above embodiment, and are not described herein again.
In one embodiment, please refer to fig. 5, fig. 5 is a system architecture diagram of a tooth brushing information entry system according to an embodiment. The system includes an oral cleaning device 10 and an external apparatus 20, the oral cleaning device 10 including a camera module 100 and a posture sensor module 110. Specifically, the method comprises the following steps:
the external equipment is used for outputting guide information corresponding to the area to be input, the guide information is used for indicating a user to place the brush head of the oral cleaning device in the current oral cleaning area, and the current oral cleaning area is one of the plurality of divided oral cleaning areas.
The oral cleaning device is used for acquiring a current facial image of a user through the camera module and acquiring current posture information of the oral cleaning device through the posture sensor module; if the current oral cavity cleaning area is determined to be the area to be recorded according to the current facial image and the current posture information, generating a command to be recorded, and outputting the command to be recorded; if a determination instruction corresponding to the instruction to be input is received, responding to the determination instruction, acquiring a second facial image of the user through the camera module, acquiring second posture information of the oral cleaning device through the sensor module, and generating input action information according to the second facial image and the second posture information; and determining standard action information corresponding to the current oral cleaning area according to the input action information.
In this embodiment, before the user performs the actual oral cleaning operation, the standard action information corresponding to each oral cleaning area may be entered into the oral cleaning device. Specifically, the external device displays an oral cavity cleaning area to be entered with standard action information, and the area is an area to be entered. The user places the brush head of the oral cleaning device on an oral cleaning area according to the guide information displayed by the external device or played by voice. Wherein the instructional information may include an image of the area to be entered and an indication of standard action information to be entered into the area to be entered, the oral cleaning area to which the user places the brush head of the oral cleaning device being the current oral cleaning area.
At this moment, the oral cleaning device can acquire the current facial image of the user through the camera module, acquire the current posture information of the brush head through the posture sensor module, and determine from them whether the current oral cleaning area in which the user has placed the brush head is the area to be entered indicated by the external device. If so, the oral cleaning device generates an instruction to be entered and outputs it through a voice component or a display component. After the user receives the instruction to be entered output by the oral cleaning device, the device can be made to generate or receive a determination instruction corresponding to that instruction through active triggering, passive triggering, or other means. In response to the determination instruction, the oral cleaning device controls the camera module to collect the second facial image corresponding to the user's oral cleaning area, controls the posture sensor module to collect the second posture information of the device in that area, and generates the entry action information corresponding to the area; the specific way of generating the entry action information may be the same as that of generating the first action information, and is not repeated here. The oral cleaning device can take the entry action information as the standard action information of the corresponding oral cleaning area.
Determining the standard action information from the entered action information avoids errors in the standard action information caused by differences in oral form, tooth form, facial form and the like between users, errors that would otherwise degrade the accuracy of identifying the oral cleaning areas and the oral cleaning actions; errors caused by individual body characteristics are thereby eliminated and the identification accuracy improved. Moreover, before collection, the cooperation between the user and the oral cleaning device is used to judge whether the oral cleaning area in which the brush head is located is the indicated one, which effectively avoids the situation where the entered information does not correspond to the intended oral cleaning area.
In one embodiment, the oral cleaning device is further configured to end the brushing information entry if each oral cleaning region has determined corresponding standard action information.
In the embodiment of the application, the judgment of the oral cleaning areas and the input process of the action information are repeated until the standard action information of each oral cleaning area is successfully input. The entry of the standard action information of each oral cleaning area can be realized at one time.
In one embodiment, the oral cleaning device is further configured to generate a determination instruction if an entry operation triggered by an entry key on the oral cleaning device is detected; or the attitude information of the oral cleaning device is continuously acquired through the sensor module, and if the acquired attitude information in the preset time period is kept unchanged, a determination instruction is generated.
In the embodiment of the application, the user can trigger an entry operation through an entry key on the oral cleaning device, and the oral cleaning device generates a determination instruction when such an entry operation is detected. The oral cleaning device can also continuously acquire its posture information through the posture sensor module; if, over a preset time period, all of the posture information remains unchanged or stays within a preset range, the oral cleaning device can generate a determination instruction. For example, suppose the preset time period covers three sampling moments at which the collected posture information is (1,1,1), (1.1, 1.2, 1.3) and (1.3, 1.2, 1.4), and the preset range is [1,1.5]; the posture information is then considered to have remained unchanged during the preset time period, and the oral cleaning device generates the determination instruction. Generating the determination instruction either through a key press or by automatically detecting that the posture information has remained stable makes the entry process of the oral cleaning device more convenient.
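The "hold still to confirm" trigger can be sketched with the exact numbers of the example above: if every posture component in the sampling window lies inside the preset range, the determination instruction is generated. The function name and per-axis range check are illustrative assumptions.

```python
def posture_stable(samples, low, high):
    """True if every component of every (x, y, z) posture sample in the
    window lies inside the preset range [low, high]."""
    return all(low <= v <= high for s in samples for v in s)


# The three sampling moments and preset range [1, 1.5] from the text.
window = [(1, 1, 1), (1.1, 1.2, 1.3), (1.3, 1.2, 1.4)]
confirm = posture_stable(window, 1.0, 1.5)
```

Here `confirm` is true, so the device would generate the determination instruction.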
In one embodiment, the oral cleaning device is further used for acquiring a plurality of pieces of recorded action information obtained in the current oral cleaning area; calculating the average value of a plurality of pieces of face information to obtain average face information, and calculating the average value of second posture information in a plurality of pieces of input action information to obtain average posture information, wherein the plurality of pieces of face information are obtained from a plurality of second face images; obtaining average input action information according to the average face information and the average posture information, wherein the average input action information comprises the average face information and the average posture information; and determining the average recorded action information as standard action information corresponding to the current oral cleaning area.
In this embodiment of the application, as for the process of determining the standard action information corresponding to each oral cleaning area according to the entry action information: specifically, after the oral cleaning device acquires the entry action information collected in an oral cleaning area, the acquisition process may be repeated, so that a plurality of pieces of entry action information are obtained for each oral cleaning area. Each piece of entry action information comprises a second facial image and second posture information. For an oral cleaning area, the oral cleaning device may establish a facial image coordinate system, extract the same features from the plurality of facial images collected for that area, determine the coordinates of the extracted features of each facial image in that coordinate system, and let those coordinates form the face information corresponding to the facial image. The mean of the plurality of pieces of face information, that is, the mean of the corresponding coordinates, is calculated to obtain the average face information. Similarly, the oral cleaning device calculates the mean of the plurality of pieces of second posture information collected in the oral cleaning area to obtain the average posture information. The average face information and the average posture information form the average entry action information corresponding to the oral cleaning area, and the average entry action information corresponding to the other oral cleaning areas is obtained in the same way. The oral cleaning device takes the average entry action information corresponding to each oral cleaning area as the standard action information corresponding to that area.
By using the average entry action information composed of the average value of the face information and the average value of the second posture information as the standard action information, the entry action information can be subjected to noise reduction processing, so that the obtained standard action information is more reasonable.
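The averaging step can be sketched as follows: the mean of each landmark coordinate across the face captures, and the mean of each posture component across the posture samples. The landmark layout and sample values are made up for illustration.

```python
def mean_entries(entries):
    """entries: list of (face_coords, posture), where face_coords is a list
    of (x, y) feature coordinates in the facial image coordinate system and
    posture is an (x, y, z) tuple from the posture sensor module."""
    n = len(entries)
    n_pts = len(entries[0][0])
    # Average face information: per-landmark mean of x and y coordinates.
    avg_face = [
        (sum(e[0][i][0] for e in entries) / n,
         sum(e[0][i][1] for e in entries) / n)
        for i in range(n_pts)
    ]
    # Average posture information: component-wise mean.
    avg_pose = tuple(sum(e[1][k] for e in entries) / n for k in range(3))
    return avg_face, avg_pose


# Two hypothetical entry samples for one oral cleaning area.
samples = [([(1, 1)], (1, 1, 1)), ([(3, 5)], (3, 1, 3))]
avg_face, avg_pose = mean_entries(samples)
```

The resulting pair would be stored as the standard action information of that area.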
In one embodiment, the oral cleaning device is further used for acquiring a plurality of pieces of input action information obtained in the current oral cleaning area; determining a second face information range according to the maximum and minimum values of a plurality of pieces of second face information, wherein the pieces of second face information are obtained from the second facial images in the plurality of pieces of input action information; determining a second posture information range according to the maximum and minimum values of the second posture information in the plurality of pieces of input action information; obtaining an input action information range according to the second face information range and the second posture information range, wherein the input action information range comprises the second face information range and the second posture information range; and determining the input action information range as the standard action information corresponding to the current oral cleaning area.
In the embodiment of the application, as for the process of determining the standard action information corresponding to each oral cleaning area according to the input action information: specifically, after the oral cleaning device acquires the input action information collected in an oral cleaning area, the acquisition process may be repeated, so that a plurality of pieces of input action information are obtained for each oral cleaning area. Each piece of input action information comprises a second facial image and second posture information. For an oral cleaning area, the oral cleaning device may establish a facial image coordinate system, extract the same features from the plurality of facial images collected for that area, determine the coordinates of the extracted features of each facial image in that coordinate system, and let those coordinates form the face information corresponding to the facial image. The face information range is obtained from the maximum and minimum values among the plurality of pieces of face information, that is, the maximum and minimum of the abscissa and the maximum and minimum of the ordinate among the coordinates. For example, if the coordinates of three pieces of face information corresponding to one oral cleaning area in the established facial image coordinate system are (1,1), (1,10) and (2,8), then the maximum and minimum of the abscissa are 2 and 1 and the maximum and minimum of the ordinate are 10 and 1, so the face information range is a rectangular region with abscissa from 1 to 2 and ordinate from 1 to 10.
The oral cleaning device determines a second posture information range according to the maximum and minimum values of the second posture information in the plurality of pieces of input action information. For example, one oral cleaning area corresponds to three pieces of posture information, each of which includes a velocity, an acceleration and an angular velocity, where a negative velocity or acceleration indicates the direction from the maxilla to the mandible and a positive value indicates the direction from the mandible to the maxilla. If the three pieces of posture information are (1,1,1), (-1,2,8) and (3,-1,5), then the velocity range is [-1,3], the acceleration range is [-1,2], and the angular velocity range is [1,8]. If the velocity, acceleration and angular velocity of a piece of posture information each fall within the corresponding range, that posture information conforms to the standard action information.
The second face information range and the second posture information range form the input action information range corresponding to the oral cleaning area, and the input action information ranges corresponding to the other oral cleaning areas are acquired in the same way. The oral cleaning device takes the input action information range corresponding to each oral cleaning area as the standard action information corresponding to that area. Using the input action information range composed of the face information range and the second posture information range as the standard action information effectively denoises the input action information, so that the obtained standard action information is more reasonable.
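The range construction and the later membership test can be sketched with the exact numbers from the two examples above (note the acceleration values 1, 2, -1 give the range [-1,2]). Function names are illustrative.

```python
def build_ranges(samples):
    """samples: list of equal-length numeric tuples.
    Returns a per-component list of (min, max) pairs."""
    return [(min(col), max(col)) for col in zip(*samples)]


def in_ranges(sample, ranges):
    """True if every component of sample falls inside its (min, max) range."""
    return all(lo <= v <= hi for v, (lo, hi) in zip(sample, ranges))


# Face information example: three (x, y) feature coordinates.
face_pts = [(1, 1), (1, 10), (2, 8)]
face_ranges = build_ranges(face_pts)

# Posture example: (velocity, acceleration, angular velocity) triples.
pose_samples = [(1, 1, 1), (-1, 2, 8), (3, -1, 5)]
pose_ranges = build_ranges(pose_samples)
```

A newly acquired posture sample conforms to the standard action information when `in_ranges` returns true for it.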
In one embodiment, the oral cleaning device is further configured to determine the switched oral cleaning area as a new current oral cleaning area if the oral cleaning device performs oral cleaning area switching; acquiring third movement track information of a user through a camera module and acquiring fourth movement track information of the user through a sensor module to obtain input track information, wherein the input track information comprises the third movement track information and the fourth movement track information; and determining standard action information corresponding to the current oral cleaning area according to the input action information, and determining standard track information corresponding to the current oral cleaning area according to the input track information.
In the embodiment of the application, when the oral cleaning device performs the area switching of the oral cleaning device, the switched oral cleaning area is determined as a new current oral cleaning area. The oral cleaning device can acquire a plurality of facial images to form third moving track information when the oral cleaning device is switched in the area through the camera module, and acquire a plurality of posture information to form fourth moving track information when the oral cleaning device is switched in the area through the posture sensor module. And the oral cleaning device forms the input track information by the collected third moving track information and the fourth moving track information, and takes the input track information as the input track information corresponding to the oral cleaning area to be switched to. For example, when the oral cleaning device moves from the oral cleaning region 1 to the oral cleaning region 2, the acquired entry trajectory information is entry trajectory information corresponding to the oral cleaning region 2.
The oral cleaning device takes the entered track information as the standard track information of the corresponding oral cleaning area, where each oral cleaning area may correspond to a plurality of pieces of standard track information, namely the standard track information for switching to that area from each of the different oral cleaning areas. Adding the entered track information further assists the identification of the oral cleaning areas and improves the accuracy of oral cleaning area identification.
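Storing one standard track per source area, as described above, amounts to keying each destination area's tracks by the area the device came from. The data shapes and names below are illustrative assumptions.

```python
# standard_tracks[to_area][from_area] = (face_track, pose_track), where
# face_track is the sequence of facial images captured during the switch
# and pose_track the sequence of posture samples.
standard_tracks = {}


def record_track(from_area, to_area, face_track, pose_track):
    """Store the entered track info for switching from_area -> to_area."""
    standard_tracks.setdefault(to_area, {})[from_area] = (face_track, pose_track)


# Moving from oral cleaning area 1 to area 2 yields area 2's track entry.
record_track(1, 2, ["img_a", "img_b"], [(1, 1, 1), (1, 2, 1)])
```

At recognition time, an observed switching track would be matched against every stored track of the candidate destination area.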
In one embodiment, the external device is further configured to obtain guidance action information corresponding to each oral cleaning area; and sequentially outputting guide action information corresponding to each oral cleaning area according to a preset arrangement sequence of each oral cleaning area, wherein the guide action information is used for indicating standard action information input into the corresponding oral cleaning area according to the guide action information.
In the embodiment of the application, the external device may obtain the guidance action information corresponding to each oral cleaning area; the guidance action information may be stored in the server, in the oral cleaning device, or in the external device itself. The external device outputs the guidance action information corresponding to each oral cleaning area in turn, according to a preset arrangement order of the oral cleaning areas, where the guidance action information is information instructing the user to enter the standard action information of the corresponding oral cleaning area, that is, showing the user how to perform the standard action for each area. For the 16 oral cleaning areas of fig. 2, the preset arrangement order may run from the maxilla to the mandible and from the left cheek to the right cheek, or from the left cheek to the right cheek and from the maxilla to the mandible, which is not particularly limited here. Outputting the guidance action information in turn according to the preset arrangement order to guide the user through entering the standard action information can improve both the efficiency of the entry process and the accuracy of the entered standard action information.
In some embodiments, the instructional action information includes a schematic view of the various oral cleaning areas of fig. 2.
In the embodiment of the application, the schematic diagram of each oral cavity cleaning area is displayed, so that a user can conveniently find the corresponding oral cavity cleaning area, and the efficiency of the input process is further improved.
In one embodiment, the oral cleaning apparatus is further configured to transmit the second facial image and the second pose information to an external device;
the external device is also used for receiving the second facial image and the second posture information sent by the oral cleaning device, and generating entry action information according to the second facial image and the second posture information; and determining the standard action information corresponding to the current oral cleaning area according to the entry action information.
In this embodiment of the application, the specific execution steps by which the external device generates the entry action information according to the second facial image and the second posture information are the same as those in the above embodiment, and are not described herein again.
Referring to fig. 6, fig. 6 is a schematic structural diagram of an oral cleaning area recognition apparatus according to an embodiment of the present application. The oral cleaning region recognition apparatus is applied to an oral cleaning apparatus, and as shown in fig. 6, the oral cleaning region recognition apparatus 600 may include: a data acquisition module 610 and a region determination module 620.
The data acquisition module 610 is configured to acquire a first facial image of the user through the camera module, acquire first posture information of the oral cleaning device through the posture sensor module, and generate first action information according to the first facial image and the first posture information.
The area determining module 620 is configured to match the first action information with the standard action information corresponding to each oral cleaning area, and determine a target oral cleaning area corresponding to the first action information.
As an optional implementation, the region determining module 620 is further configured to:
and matching the first action information with the standard action information corresponding to each oral cavity cleaning area respectively to obtain a matching result corresponding to each oral cavity cleaning area.
And screening out a target matching result from the matching results corresponding to the oral cavity cleaning areas, and determining the oral cavity cleaning area corresponding to the target matching result as a target oral cavity cleaning area.
As an optional implementation, the data acquisition module 610 is further configured to:
if the oral cleaning device switches the oral cleaning area, determining first moving track information according to the track image acquired by the camera module, and determining second moving track information according to the first track posture information acquired by the posture sensor module;
and determining a matching result according to the first action information, the first movement track information and the second movement track information, wherein the matching result is obtained by respectively matching the first action information with the standard action information corresponding to each oral cavity cleaning area and matching the first movement track information and the second movement track information with the standard track information corresponding to each oral cavity cleaning area.
As an optional implementation, the region determining module 620 is further configured to:
matching the first action information with standard action information corresponding to each oral cleaning area stored in the oral cleaning device to obtain a matching result; or,
transmitting the first action information to external equipment so that the external equipment matches the first action information with standard action information corresponding to each oral cavity cleaning area stored in the external equipment to obtain a matching result; and receiving the matching result transmitted by the external equipment.
Referring to fig. 7, fig. 7 is a schematic structural diagram of another oral cleaning region identification device disclosed in the embodiment of the present application. The oral cleaning region identification device shown in fig. 7 is obtained by further optimizing the oral cleaning region identification device shown in fig. 6. Compared with the device shown in fig. 6, the oral cleaning region identification device 600 shown in fig. 7 may further include:
a deviation calculating module 630, configured to calculate a deviation value between the first action information and the standard action information corresponding to the target oral cleaning region;
if the deviation value is larger than a preset deviation threshold value, determining that the first action information does not accord with the action standard;
and if the deviation value is not greater than the preset deviation threshold value, determining that the first action information meets the action standard.
As an alternative embodiment, the deviation value includes a first deviation value and a second deviation value; the preset deviation threshold includes a first preset deviation threshold and a second preset deviation threshold.
The deviation calculating module 630 is further configured to:
calculating a deviation value between the first action information and standard action information corresponding to the target oral cleaning area, including:
calculating a deviation value between the first posture information and standard posture information corresponding to the oral cleaning area to obtain a first deviation value;
calculating a deviation value between the first facial image and a standard facial image corresponding to the oral cleaning area to obtain a second deviation value;
and if the first deviation value is larger than a first preset deviation threshold value and/or the second deviation value is larger than a second preset deviation threshold value, determining that the first action information does not meet the action standard.
As an alternative implementation, the deviation calculating module 630 is further configured to:
if it is detected that the oral cleaning operation has been completed in each oral cleaning area, obtaining a deviation value between the first action information collected in each oral cleaning area and the standard action information corresponding to each oral cleaning area;
and determining the score of the oral cleaning process according to the corresponding deviation value of each oral cleaning area.
As an alternative implementation, the deviation calculating module 630 is further configured to:
according to the deviation value corresponding to the first oral cavity cleaning area, determining a target deviation value interval in which the deviation value falls from a plurality of deviation value intervals corresponding to the first oral cavity cleaning area, and determining a score corresponding to the target deviation value interval as the score of the first oral cavity cleaning area; wherein the first oral cleaning region is any one of the plurality of regions, and the different deviation value intervals correspond to different scores respectively.
The sum of the scores of the oral cleaning regions is determined as the score of the oral cleaning process.
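The interval-based scoring just described can be sketched as below. The interval bounds and scores are hypothetical example values; the disclosure only specifies that different deviation-value intervals map to different scores and that the process score is the sum over regions.

```python
def region_score(deviation, intervals):
    """Map a region's deviation value to a score.

    intervals: list of (upper_bound, score) pairs sorted by upper_bound;
    the deviation falls into the first (target) interval whose upper
    bound it does not exceed. Bounds/scores are hypothetical.
    """
    for upper, score in intervals:
        if deviation <= upper:
            return score
    return 0  # deviation beyond all intervals

INTERVALS = [(0.1, 10), (0.3, 7), (0.6, 4)]  # hypothetical

def process_score(region_deviations):
    """Score of the oral cleaning process = sum of per-region scores."""
    return sum(region_score(d, INTERVALS) for d in region_deviations)

print(process_score([0.05, 0.2, 0.5]))  # 10 + 7 + 4 = 21
```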
Referring to fig. 8, fig. 8 is a schematic structural diagram of another oral cleaning area recognition device disclosed in an embodiment of the present application. The oral cleaning area recognition device shown in fig. 8 is obtained by further optimizing the oral cleaning area recognition device shown in fig. 6. Compared with the oral cleaning area recognition device shown in fig. 6, the oral cleaning area recognition device 600 shown in fig. 8 may further include:
an information prompt module 640, configured to generate first prompt information;
outputting the first prompt information, wherein the first prompt information is used for prompting that the first action information does not meet the action standard;
and playing action guidance content corresponding to the standard action information of the target oral cleaning area, wherein the action guidance content is used for correcting the oral cleaning action applied to the current target oral cleaning area.
As an optional implementation, the information prompt module 640 is further configured to:
recording the duration for which the oral cleaning device stays in the target oral cleaning area to obtain the regional oral cleaning time;
and when the regional oral cleaning time is greater than a preset time threshold, generating second prompt information and outputting the second prompt information, wherein the second prompt information is used for prompting that the oral cleaning device has stayed in the target oral cleaning region for too long.
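A minimal sketch of the duration tracking described above follows; the class name, API, and the 30-second threshold are hypothetical, as the disclosure does not specify a concrete value for the preset time threshold.

```python
import time

class RegionTimer:
    """Tracks how long the oral cleaning device stays in the current
    target region and flags when the preset time threshold is exceeded,
    triggering the second prompt information. (Threshold hypothetical.)"""

    def __init__(self, threshold_s=30.0):
        self.threshold_s = threshold_s
        self.entered_at = None

    def enter_region(self, now=None):
        """Call when the device enters a new target oral cleaning area."""
        self.entered_at = now if now is not None else time.monotonic()

    def overstayed(self, now=None):
        """True once the regional oral cleaning time exceeds the threshold."""
        if self.entered_at is None:
            return False
        now = now if now is not None else time.monotonic()
        return (now - self.entered_at) > self.threshold_s

timer = RegionTimer(threshold_s=30.0)
timer.enter_region(now=0.0)
print(timer.overstayed(now=10.0))  # False
print(timer.overstayed(now=45.0))  # True -> emit second prompt
```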
As an optional implementation, the information prompt module 640 is further configured to:
if it is detected that the oral cleaning operation has been completed in the target oral cleaning area, generating third prompt information and outputting the third prompt information, wherein the third prompt information is used for prompting the user to move the oral cleaning device from the target oral cleaning area to the next oral cleaning area;
and if the oral cleaning operation is detected to be completed in each oral cleaning area, controlling the oral cleaning device to stop working.
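The region-completion logic above (prompt to move to the next area; stop the device once every area is cleaned) can be sketched as a simple decision function. The region names and return-value encoding are hypothetical placeholders.

```python
def next_action(completed_regions, all_regions, current_region):
    """Decide what to do after the current region finishes cleaning:
    either prompt a move to the next uncleaned region (third prompt),
    or stop the device when all regions are done."""
    done = set(completed_regions) | {current_region}
    if done >= set(all_regions):
        return "stop_device"            # all regions cleaned -> stop working
    remaining = [r for r in all_regions if r not in done]
    return f"move_to:{remaining[0]}"    # third prompt: go to next region

# Hypothetical four-region division
REGIONS = ["upper-left", "upper-right", "lower-left", "lower-right"]
print(next_action(["upper-left"], REGIONS, "upper-right"))   # move_to:lower-left
print(next_action(REGIONS[:-1], REGIONS, "lower-right"))     # stop_device
```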
Referring to fig. 9, fig. 9 is a schematic structural diagram of an oral cleaning device according to an embodiment. As shown in fig. 9, the oral cleaning device 900 may include:
a memory 910 storing executable program code;
a processor 920 coupled with the memory 910;
the processor 920 calls the executable program code stored in the memory 910 to execute any of the oral cleaning area identification methods disclosed in the embodiments of the present application.
It should be noted that the oral cleaning device shown in fig. 9 may further include components, which are not shown, such as a power source, an input button, a camera, a speaker, a screen, an RF circuit, a Wi-Fi module, a bluetooth module, and a sensor, which are not described in detail in this embodiment.
The embodiment of the application discloses a computer-readable storage medium storing a computer program, wherein the computer program causes a computer to execute any one of the oral cleaning area identification methods or the tooth brushing information entry methods disclosed in the embodiments of the present application.
A computer program product is disclosed in an embodiment of the present application, the computer program product comprising a non-transitory computer readable storage medium storing a computer program, and the computer program is operable to cause a computer to perform any of the oral cleaning area identification methods disclosed in embodiments of the present application.
It should be appreciated that reference throughout this specification to "one embodiment" or "an embodiment" means that a particular feature, structure or characteristic described in connection with the embodiment is included in at least one embodiment of the present application. Thus, the appearances of the phrases "in one embodiment" or "in an embodiment" in various places throughout this specification are not necessarily all referring to the same embodiment. Furthermore, the particular features, structures, or characteristics may be combined in any suitable manner in one or more embodiments. Those skilled in the art should also appreciate that the embodiments described in this specification are all alternative embodiments and that the acts and modules involved are not necessarily required for this application.
In various embodiments of the present application, it should be understood that the size of the serial number of each process described above does not mean that the execution sequence is necessarily sequential, and the execution sequence of each process should be determined by its function and inherent logic, and should not constitute any limitation on the implementation process of the embodiments of the present application.
The units described as separate parts may or may not be physically separate, and parts displayed as units may or may not be physical units, may be located in one place, or may be distributed on a plurality of network units. Some or all of the units can be selected according to actual needs to achieve the purpose of the solution of the embodiment.
In addition, functional units in the embodiments of the present application may be integrated into one processing unit, or each unit may exist alone physically, or two or more units are integrated into one unit. The integrated unit can be realized in a form of hardware, and can also be realized in a form of a software functional unit.
The integrated units, if implemented as software functional units and sold or used as stand-alone products, may be stored in a computer-accessible memory. Based on such understanding, the technical solution of the present application, in essence, or the part thereof contributing to the prior art, or all or part of the technical solution, may be embodied in the form of a software product stored in a memory, including several instructions for causing a computer device (which may be a personal computer, a server, a network device, or the like, and may specifically be a processor in the computer device) to execute all or part of the steps of the methods of the embodiments of the present application.
It will be understood by those skilled in the art that all or part of the steps in the methods of the embodiments described above may be implemented by a program instructing the relevant hardware, and the program may be stored in a computer-readable storage medium. The storage medium includes a Read-Only Memory (ROM), a Random Access Memory (RAM), a Programmable Read-Only Memory (PROM), an Erasable Programmable Read-Only Memory (EPROM), a One-time Programmable Read-Only Memory (OTPROM), an Electrically Erasable Programmable Read-Only Memory (EEPROM), a Compact Disc Read-Only Memory (CD-ROM) or other optical disc memory, a magnetic disk memory, a tape memory, or any other computer-readable medium that can be used to carry or store data.
The oral cleaning area identification method, the tooth brushing information entry system and the related devices disclosed in the embodiments of the present application are described in detail above, and specific examples are applied herein to illustrate the principles and implementations of the present application, and the descriptions of the above embodiments are only used to help understand the method and the core ideas of the present application. Meanwhile, for a person skilled in the art, according to the idea of the present application, there may be variations in the specific embodiments and the application scope, and in summary, the content of the present specification should not be construed as a limitation to the present application.
Claims (24)
1. An oral cleaning area recognition method is applied to an oral cleaning device, the oral cleaning device comprises a camera module and a posture sensor module, and the method comprises the following steps:
acquiring a first face image of a user through the camera module, acquiring first posture information of the oral cleaning device through the posture sensor module, and generating first action information according to the first face image and the first posture information;
and respectively matching the first action information with the standard action information corresponding to each oral cavity cleaning area, and determining a target oral cavity cleaning area corresponding to the first action information.
2. The method of claim 1, wherein the matching the first action information with the standard action information corresponding to each oral cleaning area respectively, and the determining the target oral cleaning area corresponding to the first action information comprises:
matching the first action information with standard action information corresponding to each oral cavity cleaning area respectively to obtain a matching result corresponding to each oral cavity cleaning area;
and screening out a target matching result from the matching results corresponding to the oral cavity cleaning areas, and determining the oral cavity cleaning area corresponding to the target matching result as the target oral cavity cleaning area.
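The match-then-screen procedure of claim 2 can be illustrated with a small sketch. The claims do not fix a concrete matching metric, so the feature-vector representation of "action information" and the negative-distance match score used here are hypothetical stand-ins.

```python
def identify_region(first_action, standards):
    """Match the first action information against each oral cleaning
    area's standard action information, then screen out the target
    matching result (the best score) and return its region.

    first_action / standards values: hypothetical feature vectors;
    match score: negative Euclidean distance (hypothetical metric)."""
    def match_score(a, b):
        return -sum((x - y) ** 2 for x, y in zip(a, b)) ** 0.5

    results = {region: match_score(first_action, std)
               for region, std in standards.items()}
    return max(results, key=results.get)  # region of the target match

standards = {
    "upper-left": [0.0, 1.0],
    "lower-right": [1.0, 0.0],
}
print(identify_region([0.9, 0.1], standards))  # "lower-right"
```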
3. The method according to claim 1, wherein before the matching the first action information with the standard action information corresponding to each oral cleaning region respectively and determining the target oral cleaning region corresponding to the first action information, the method further comprises:
if the oral cleaning device switches oral cleaning areas, determining first moving track information according to the track image acquired by the camera module, and determining second moving track information according to the first track posture information acquired by the sensor module;
the step of matching the first action information with the standard action information corresponding to each oral cavity cleaning area to determine a target oral cavity cleaning area corresponding to the first action information includes:
and respectively matching the first action information with standard action information corresponding to each oral cavity cleaning area, matching the first movement track information and the second movement track information with standard track information corresponding to each oral cavity cleaning area, and determining a target oral cavity cleaning area corresponding to the first action information, the first movement track information and the second movement track information.
4. The method of claim 1, wherein the matching the first action information with the standard action information corresponding to each oral cleaning region respectively, and the determining the target oral cleaning region corresponding to the first action information comprises:
matching the first action information with standard action information corresponding to each oral cleaning area stored in the oral cleaning device, and determining a target oral cleaning area corresponding to the first action information; or,
and transmitting the first action information to an external device, so that the external device matches the first action information with standard action information corresponding to each oral cleaning area stored in the external device, and determines a target oral cleaning area corresponding to the first action information.
5. The method according to claim 1, further comprising, after the determining the target oral cleaning area corresponding to the first action information:
calculating a deviation value between the first action information and standard action information corresponding to the target oral cleaning area;
if the deviation value is larger than a preset deviation threshold value, determining that the first action information does not accord with an action standard;
and if the deviation value is not greater than the preset deviation threshold value, determining that the first action information meets an action standard.
6. The method of claim 5, wherein the offset value comprises a first offset value and a second offset value; the preset deviation threshold comprises a first preset deviation threshold and a second preset deviation threshold;
the calculating a deviation value between the first action information and standard action information corresponding to the target oral cleaning area comprises:
calculating a deviation value between the first posture information and standard posture information corresponding to the oral cleaning area to obtain a first deviation value;
calculating a deviation value between the first facial image and a standard facial image corresponding to the oral cleaning area to obtain a second deviation value;
if the deviation value is greater than a preset deviation threshold, determining that the first action information does not meet an action standard, comprising:
and if the first deviation value is greater than the first preset deviation threshold value and/or the second deviation value is greater than the second preset deviation threshold value, determining that the first action information does not meet the action standard.
7. The method according to claim 5, wherein after determining that the first action information does not meet an action criterion if the deviation value is greater than a preset deviation threshold, further comprising:
generating first prompt information;
outputting the first prompt information, wherein the first prompt information is used for prompting that the first action information does not conform to a standard action;
and playing action guidance content corresponding to the standard action information of the target oral cavity cleaning area, wherein the action guidance content is used for correcting the oral cavity cleaning action acted on the target oral cavity cleaning area.
8. The method of claim 5, further comprising:
if it is detected that the oral cleaning operation has been completed in each oral cleaning area, acquiring a deviation value between the first action information acquired in each oral cleaning area and the standard action information corresponding to each oral cleaning area;
and determining the score of the oral cleaning process according to the deviation values corresponding to the oral cleaning areas.
9. The method of claim 8, wherein said determining a score for an oral cleaning session based on each of said deviation values comprises:
according to a deviation value corresponding to a first oral cavity cleaning area, determining a target deviation value interval in which the deviation value falls from a plurality of deviation value intervals corresponding to the first oral cavity cleaning area, and determining a score corresponding to the target deviation value interval as the score of the first oral cavity cleaning area; wherein the first oral cleaning region is any one of a plurality of regions, and the different deviation value intervals correspond to different scores respectively;
determining a score for the oral cleaning process as a sum of the scores for each of the oral cleaning regions.
10. The method according to claim 1, further comprising, after the determining the target oral cleaning area corresponding to the first action information:
recording the duration of the oral cleaning device in the target oral cleaning area to obtain the area oral cleaning time;
and when the oral cleaning time of the region is greater than a preset time threshold, generating second prompt information and outputting the second prompt information, wherein the second prompt information is used for prompting that the oral cleaning device has stayed in the target oral cleaning region for too long.
11. The method according to claim 1, further comprising, after the determining the target oral cleaning area corresponding to the first action information:
if the target oral cleaning area is detected to finish the oral cleaning operation, generating third prompt information and outputting the third prompt information, wherein the third prompt information is used for prompting the oral cleaning device to move from the target oral cleaning area to the next oral cleaning area;
and if the oral cleaning operation of each oral cleaning area is detected to be completed, controlling the oral cleaning device to stop working.
12. An oral cleaning area identification method applied to an external device, the method comprising:
receiving first action information transmitted by an oral cleaning device, wherein the first action information comprises a first facial image and first posture information acquired by the oral cleaning device;
matching the first action information with standard action information corresponding to each oral cavity cleaning area respectively to obtain a matching result;
transmitting the matching result to the oral cleaning device.
13. A tooth brushing information input system is characterized by comprising an oral cleaning device and external equipment, wherein the oral cleaning device comprises a camera module and an attitude sensor module;
the external equipment is used for outputting guide information corresponding to an area to be input, the guide information is used for instructing a user to place a brush head of the oral cleaning device in a current oral cleaning area, and the current oral cleaning area is one of a plurality of divided oral cleaning areas;
the oral cleaning device is used for acquiring a current facial image of a user through the camera module and acquiring current posture information of the oral cleaning device through the posture sensor module; if the current oral cavity cleaning area is determined to be the area to be recorded according to the current facial image and the current posture information, generating a command to be recorded, and outputting the command to be recorded; if a determination instruction corresponding to the instruction to be input is received, responding to the determination instruction, acquiring a second facial image of the user through the camera module, acquiring second posture information of the oral cleaning device through the sensor module, and generating input action information according to the second facial image and the second posture information; and determining standard action information corresponding to the current oral cleaning area according to the input action information.
14. The system of claim 13, wherein the oral cleaning device is further configured to end brushing information entry if each of the oral cleaning regions has determined corresponding standard action information.
15. The system according to claim 13, wherein the oral cleaning device is further configured to generate the determination instruction if an entry operation triggered by an entry key on the oral cleaning device is detected; or continuously acquiring the posture information of the oral cleaning device through the sensor module, and if the posture information acquired within a preset time period is kept unchanged, generating the determination instruction.
16. The system of claim 13, wherein the oral cleaning device is further configured to obtain a plurality of entered action information obtained at the current oral cleaning area; calculating a mean value of a plurality of pieces of face information, obtaining mean face information, and calculating a mean value of second posture information in the plurality of pieces of entered motion information, obtaining mean posture information, wherein the plurality of pieces of face information are obtained from a plurality of second face images; obtaining average input action information according to the average face information and the average posture information, wherein the average input action information comprises the average face information and the average posture information; and determining the average recorded action information as standard action information corresponding to the current oral cleaning area.
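The averaging procedure of claim 16 (mean face information plus mean posture information as the standard action information) can be sketched as follows; the flat feature-vector representation is a hypothetical simplification of the facial images and posture readings.

```python
def mean_standard_action(entries):
    """entries: list of (face_features, posture) pairs, one per piece of
    entered action information collected at the current oral cleaning
    area. Returns element-wise means (mean face information, mean
    posture information) as the standard action information.
    Feature representation is hypothetical."""
    n = len(entries)
    faces = [f for f, _ in entries]
    postures = [p for _, p in entries]
    mean_face = [sum(v) / n for v in zip(*faces)]
    mean_posture = [sum(v) / n for v in zip(*postures)]
    return mean_face, mean_posture

entries = [([1.0, 2.0], [0.0]), ([3.0, 4.0], [2.0])]
print(mean_standard_action(entries))  # ([2.0, 3.0], [1.0])
```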
17. The system of claim 13, wherein the oral cleaning device is further configured to obtain a plurality of entered action information obtained at the current oral cleaning area; determining a second face image range according to the maximum value and the minimum value of a plurality of pieces of second face information, wherein the plurality of pieces of second face information are obtained according to the second face images in the plurality of pieces of recorded action information; determining a second attitude information range according to the maximum value and the minimum value of second attitude information in the plurality of input action information; obtaining an input action information range according to the second face image range and the second posture information range, wherein the input action information comprises the second face image range and the second posture information range; and determining the input action information range as the standard action information corresponding to the current oral cleaning area.
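The range-based alternative of claim 17 (min/max bounds over the entered action information as the standard) can be sketched like this; as above, the flat feature vectors are a hypothetical stand-in for the face and posture information.

```python
def range_standard_action(entries):
    """Per-dimension [min, max] ranges over the entered action
    information collected at the current oral cleaning area; the
    resulting range serves as the standard action information.
    (Feature representation hypothetical.)"""
    lows = [min(col) for col in zip(*entries)]
    highs = [max(col) for col in zip(*entries)]
    return lows, highs

def in_range(sample, standard):
    """At recognition time, action information matches the region if
    every component falls inside the recorded range."""
    lows, highs = standard
    return all(lo <= v <= hi for v, lo, hi in zip(sample, lows, highs))

std = range_standard_action([[0.0, 1.0], [2.0, 3.0]])
print(std)                        # ([0.0, 1.0], [2.0, 3.0])
print(in_range([1.0, 2.0], std))  # True
print(in_range([3.0, 2.0], std))  # False
```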
18. The system of claim 13, wherein the oral cleaning device is further configured to determine the switched oral cleaning area as a new current oral cleaning area if the oral cleaning device performs an oral cleaning area switching; acquiring third movement track information of a user through the camera module and acquiring fourth movement track information of the user through the sensor module to obtain input track information, wherein the input track information comprises the third movement track information and the fourth movement track information; and determining standard action information corresponding to the current oral cleaning area according to the input action information, and determining standard track information corresponding to the current oral cleaning area according to the input track information.
19. The system of claim 13, wherein the external device is further configured to obtain guiding action information corresponding to each oral cleaning area; and sequentially outputting guide action information corresponding to each oral cleaning area according to a preset arrangement sequence of each oral cleaning area, wherein the guide action information is used for indicating standard action information input into the corresponding oral cleaning area according to the guide action information.
20. The system of claim 13, wherein the oral cleaning apparatus is further configured to send the second facial image and second pose information to the external device;
the external device is further configured to: receive the second facial image and the second posture information sent by the oral cleaning device, and generate input action information according to the second facial image and the second posture information; and determine standard action information corresponding to the current oral cleaning area according to the input action information.
21. An oral cleaning area identification device, the device comprising:
the data acquisition module is used for acquiring a first face image of a user through the camera module, acquiring first posture information of the oral cleaning device through the posture sensor module, and generating first action information according to the first face image and the first posture information;
and the area determining module is used for matching the first action information with the standard action information corresponding to each oral cavity cleaning area respectively and determining a target oral cavity cleaning area corresponding to the first action information.
22. An oral cleaning device comprising a memory and a processor, a computer program stored in the memory, which computer program, when executed by the processor, causes the processor to carry out the method of any one of claims 1 to 11.
23. An external device, comprising a memory and a processor, the memory having stored therein a computer program which, when executed by the processor, causes the processor to carry out the method of claim 12.
24. A computer-readable storage medium, on which a computer program is stored, which, when being executed by a processor, carries out the method according to any one of claims 1 to 11, or 12.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202111602391.0A CN114299576B (en) | 2021-12-24 | 2021-12-24 | Oral cavity cleaning area identification method, tooth brushing information input system and related device |
Applications Claiming Priority (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202111602391.0A CN114299576B (en) | 2021-12-24 | 2021-12-24 | Oral cavity cleaning area identification method, tooth brushing information input system and related device |
Publications (2)
Publication Number | Publication Date |
---|---|
CN114299576A true CN114299576A (en) | 2022-04-08 |
CN114299576B CN114299576B (en) | 2024-08-27 |
Family
ID=80969708
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN202111602391.0A Active CN114299576B (en) | 2021-12-24 | 2021-12-24 | Oral cavity cleaning area identification method, tooth brushing information input system and related device |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN114299576B (en) |
Cited By (2)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN117598825A (en) * | 2023-06-29 | 2024-02-27 | 广州星际悦动股份有限公司 | Oral care region identification method, device, electric toothbrush and storage medium |
CN117811918A (en) * | 2024-01-05 | 2024-04-02 | 广州星际悦动股份有限公司 | Theme configuration method and device, electronic equipment and storage medium |
Citations (22)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
KR20130057688A (en) * | 2011-11-24 | 2013-06-03 | (주)케이엔씨웰빙 | Tooth brushing pattern analyzing device and analyzing system |
KR20150113647A (en) * | 2014-03-31 | 2015-10-08 | 주식회사 로보프린트 | Toothbrush with a camera and tooth medical examination system using this |
CN105877861A (en) * | 2016-06-11 | 2016-08-24 | 淮阴工学院 | Multi-degree-of-freedom intelligent oral cavity cleaning system |
CN106940793A (en) * | 2017-03-22 | 2017-07-11 | 上海大学 | A kind of posture processing method and system based on apparatus for cleaning oral cavity |
CN107832719A (en) * | 2017-11-15 | 2018-03-23 | 北京朗萌科技发展有限公司 | Brush teeth analysis method, device and toothbrushing systems |
CN108389488A (en) * | 2018-03-05 | 2018-08-10 | 泉州医学高等专科学校 | A kind of interactive oral cavity simulation system |
CN108495575A (en) * | 2015-12-22 | 2018-09-04 | 皇家飞利浦有限公司 | System, method and apparatus for providing guiding and feedback based on position and performance |
CN108596162A (en) * | 2018-07-02 | 2018-09-28 | 孟薇 | One kind is brushed teeth monitoring system |
CN110213980A (en) * | 2016-08-22 | 2019-09-06 | 科利布里有限公司 | Oral hygiene system and long-range-dental system for compliance monitoring |
CN110495962A (en) * | 2019-08-26 | 2019-11-26 | 赫比(上海)家用电器产品有限公司 | The method and its toothbrush and equipment of monitoring toothbrush position |
US20200022488A1 (en) * | 2016-12-01 | 2020-01-23 | Koninklijke Philips N.V. | Method for determining the orientation of a user's head during teeth cleaning |
CN110780705A (en) * | 2019-10-31 | 2020-02-11 | 北京大学深圳医院 | Intelligent tooth brushing guider |
CN111064810A (en) * | 2019-12-31 | 2020-04-24 | 广州皓醒湾科技有限公司 | Oral cavity cleaning interaction system |
CN111465333A (en) * | 2017-12-28 | 2020-07-28 | 高露洁-棕榄公司 | Oral hygiene system |
KR102140972B1 (en) * | 2019-05-27 | 2020-08-05 | 주식회사 아이티로 | Method, apparatus and computer-readable recording medium for providing cleaning service and contents based on scenario |
CN112057195A (en) * | 2020-09-10 | 2020-12-11 | 湖北咿呀医疗投资管理股份有限公司 | Control method, device and equipment for intelligently matching tooth brushing mode and storage medium |
CN112333439A (en) * | 2020-10-30 | 2021-02-05 | 南京维沃软件技术有限公司 | Face cleaning equipment control method and device and electronic equipment |
CN112749634A (en) * | 2020-12-28 | 2021-05-04 | 广州星际悦动股份有限公司 | Control method and device based on beauty equipment and electronic equipment |
CN112990321A (en) * | 2021-03-19 | 2021-06-18 | 有品国际科技(深圳)有限责任公司 | Tooth brushing area identification method and device based on tree network, toothbrush and medium |
CN113116580A (en) * | 2019-12-30 | 2021-07-16 | 广州星际悦动股份有限公司 | Dynamic adjusting system and method and electric toothbrush |
CN113516175A (en) * | 2021-06-04 | 2021-10-19 | 有品国际科技(深圳)有限责任公司 | Tooth brushing area identification method and device, tooth brushing equipment and storage medium |
WO2021238335A1 (en) * | 2020-05-29 | 2021-12-02 | 华为技术有限公司 | Toothbrush control method, smart toothbrush, and toothbrush system |
Patent Citations (23)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
KR20130057688A (en) * | 2011-11-24 | 2013-06-03 | (주)케이엔씨웰빙 | Tooth brushing pattern analyzing device and analyzing system |
KR20150113647A (en) * | 2014-03-31 | 2015-10-08 | 주식회사 로보프린트 | Toothbrush with a camera and tooth medical examination system using this |
CN108495575A (en) * | 2015-12-22 | 2018-09-04 | 皇家飞利浦有限公司 | System, method and apparatus for providing guiding and feedback based on position and performance |
CN105877861A (en) * | 2016-06-11 | 2016-08-24 | 淮阴工学院 | Multi-degree-of-freedom intelligent oral cavity cleaning system |
CN110213980A (en) * | 2016-08-22 | 2019-09-06 | 科利布里有限公司 | Oral hygiene system and long-range-dental system for compliance monitoring |
US20200179089A1 (en) * | 2016-08-22 | 2020-06-11 | Kolibree SAS | Oral Hygiene System for Compliance Monitoring and Tele-Dentistry System |
US20200022488A1 (en) * | 2016-12-01 | 2020-01-23 | Koninklijke Philips N.V. | Method for determining the orientation of a user's head during teeth cleaning |
CN106940793A (en) * | 2017-03-22 | 2017-07-11 | 上海大学 | A kind of posture processing method and system based on apparatus for cleaning oral cavity |
CN107832719A (en) * | 2017-11-15 | 2018-03-23 | 北京朗萌科技发展有限公司 | Brush teeth analysis method, device and toothbrushing systems |
CN111465333A (en) * | 2017-12-28 | 2020-07-28 | 高露洁-棕榄公司 | Oral hygiene system |
CN108389488A (en) * | 2018-03-05 | 2018-08-10 | 泉州医学高等专科学校 | A kind of interactive oral cavity simulation system |
CN108596162A (en) * | 2018-07-02 | 2018-09-28 | 孟薇 | One kind is brushed teeth monitoring system |
KR102140972B1 (en) * | 2019-05-27 | 2020-08-05 | 주식회사 아이티로 | Method, apparatus and computer-readable recording medium for providing cleaning service and contents based on scenario |
CN110495962A (en) * | 2019-08-26 | 2019-11-26 | 赫比(上海)家用电器产品有限公司 | The method and its toothbrush and equipment of monitoring toothbrush position |
CN110780705A (en) * | 2019-10-31 | 2020-02-11 | 北京大学深圳医院 | Intelligent tooth brushing guider |
CN113116580A (en) * | 2019-12-30 | 2021-07-16 | 广州星际悦动股份有限公司 | Dynamic adjusting system and method and electric toothbrush |
CN111064810A (en) * | 2019-12-31 | 2020-04-24 | 广州皓醒湾科技有限公司 | Oral cavity cleaning interaction system |
WO2021238335A1 (en) * | 2020-05-29 | 2021-12-02 | 华为技术有限公司 | Toothbrush control method, smart toothbrush, and toothbrush system |
CN112057195A (en) * | 2020-09-10 | 2020-12-11 | 湖北咿呀医疗投资管理股份有限公司 | Control method, device and equipment for intelligently matching tooth brushing mode and storage medium |
CN112333439A (en) * | 2020-10-30 | 2021-02-05 | 南京维沃软件技术有限公司 | Face cleaning equipment control method and device and electronic equipment |
CN112749634A (en) * | 2020-12-28 | 2021-05-04 | 广州星际悦动股份有限公司 | Control method and device based on beauty equipment and electronic equipment |
CN112990321A (en) * | 2021-03-19 | 2021-06-18 | 有品国际科技(深圳)有限责任公司 | Tooth brushing area identification method and device based on tree network, toothbrush and medium |
CN113516175A (en) * | 2021-06-04 | 2021-10-19 | 有品国际科技(深圳)有限责任公司 | Tooth brushing area identification method and device, tooth brushing equipment and storage medium |
Non-Patent Citations (3)
Title |
---|
윤현서; 채유정: "Oral health care awareness levels according to the oral health education experience of some local adults", 《JOURNAL OF KOREAN SOCIETY OF ORAL HEALTH SCIENCE》, vol. 5, no. 2, 31 January 2017 (2017-01-31), pages 35 - 39 * |
刘晓芬: "Oral health self-management of adolescent patients with fixed orthodontic appliances" (in Chinese), 《上海护理》 (Shanghai Nursing), vol. 20, no. 06, 30 June 2020 (2020-06-30) * |
卢佩佩; 陈龙; 陈海漫; 沈梅洁; 丁熙: "Development and clinical application of an electronic medical record system for oral implantology" (in Chinese), 《中国口腔种植学杂志》 (Chinese Journal of Oral Implantology), no. 04 * |
Cited By (2)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN117598825A (en) * | 2023-06-29 | 2024-02-27 | 广州星际悦动股份有限公司 | Oral care region identification method, device, electric toothbrush and storage medium |
CN117811918A (en) * | 2024-01-05 | 2024-04-02 | 广州星际悦动股份有限公司 | Theme configuration method and device, electronic equipment and storage medium |
Also Published As
Publication number | Publication date |
---|---|
CN114299576B (en) | 2024-08-27 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
CN114299576A (en) | Oral cavity cleaning area identification method, tooth brushing information input system and related device | |
CN108814747B (en) | Intelligent electric toothbrush and control method thereof | |
RU2719839C2 (en) | Methods and systems for retrieving user motion characteristics using hall sensor for providing feedback to user | |
CN110269389B (en) | Toothbrush system and toothbrush system scoring monitoring method | |
US20190045916A1 (en) | Method and system for a achieving optimal oral hygiene by means of feedback | |
CN114329991A (en) | Oral cavity cleaning scoring method and device, oral cavity cleaning device and storage medium | |
CN108495575B (en) | Systems, methods, and apparatus for providing guidance and feedback based on location and performance | |
JP6914939B2 (en) | Methods and systems for providing feedback on brushing sessions | |
US20200069042A1 (en) | Method and system for localization of an oral cleaning device | |
CN109108962A (en) | Robot, the control method of robot and storage medium | |
CN104144252A (en) | Voice communication method and mobile terminal | |
WO2016082784A1 (en) | Child teeth brushing smart training system | |
CN110658742A (en) | Multi-mode cooperative control wheelchair control system and method | |
EP3668346A1 (en) | Toothbrush coaching system | |
CN114343901A (en) | Control method of oral cleaning device and setting method of oral cleaning strategy | |
CN210185751U (en) | System for auxiliary detection gingival sulcus bleeding index | |
CN111383750A (en) | Tooth brushing control method and related device | |
CN114360020B (en) | Oral cleaning area identification method and device, electronic equipment and storage medium | |
JP2020126195A (en) | Voice interactive device, control device for voice interactive device and control program | |
CN117918984A (en) | Method and device for controlling oral cavity cleaning equipment, oral cavity cleaning equipment and storage medium | |
US20190320785A1 (en) | Method and system for determining compliance with a guided cleaning session | |
JP2019141582A (en) | Oral cavity camera system and oral camera | |
CN110489005B (en) | Two-dimensional point display with touch positioning function and two-dimensional contact driving method thereof | |
CN115543135A (en) | Control method, device and equipment for display screen | |
CN113343788A (en) | Image acquisition method and device |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||
GR01 | Patent grant | ||