CN112388643A - Multifunctional health science popularization robot - Google Patents

Multifunctional health science popularization robot

Info

Publication number
CN112388643A
CN112388643A
Authority
CN
China
Prior art keywords
module
database
science popularization
input
robot
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202011013310.9A
Other languages
Chinese (zh)
Inventor
杨静
唐燕来
李雅清
孟凡琪
刘俊研
王钊燕
樊重
黄丹萍
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Sun Yat Sen University
Original Assignee
Sun Yat Sen University
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Sun Yat Sen University
Priority to CN202011013310.9A
Publication of CN112388643A
Legal status: Pending

Classifications

    • B - PERFORMING OPERATIONS; TRANSPORTING
    • B25 - HAND TOOLS; PORTABLE POWER-DRIVEN TOOLS; MANIPULATORS
    • B25J - MANIPULATORS; CHAMBERS PROVIDED WITH MANIPULATION DEVICES
    • B25J 11/00 - Manipulators not otherwise provided for
    • B25J 11/0005 - Manipulators having means for high-level communication with users, e.g. speech generator, face recognition means
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06N - COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N 20/00 - Machine learning
    • G06V - IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V 40/00 - Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V 40/10 - Human or animal bodies, e.g. vehicle occupants or pedestrians; body parts, e.g. hands
    • G06V 40/16 - Human faces, e.g. facial parts, sketches or expressions
    • G06V 40/172 - Classification, e.g. identification

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Health & Medical Sciences (AREA)
  • General Health & Medical Sciences (AREA)
  • Mechanical Engineering (AREA)
  • Human Computer Interaction (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Robotics (AREA)
  • Software Systems (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Data Mining & Analysis (AREA)
  • Evolutionary Computation (AREA)
  • Medical Informatics (AREA)
  • Artificial Intelligence (AREA)
  • Computing Systems (AREA)
  • General Engineering & Computer Science (AREA)
  • Mathematical Physics (AREA)
  • Multimedia (AREA)
  • Oral & Maxillofacial Surgery (AREA)
  • Audiology, Speech & Language Pathology (AREA)
  • Manipulator (AREA)

Abstract

The invention discloses a multifunctional health science popularization robot, comprising: an external input module, an internal storage and database module, and a data interface module, the external input module being connected to the internal storage and database module through the data interface module. The external input module receives the user's interaction input; the internal storage and database module stores video resources, a K-V service database, and a face information database; and the data interface module recognizes the user input to obtain a classification result and transmits the search result to the output end. The multifunctional health science popularization robot is built from these three modules; rich external interactive input combined with preset internal storage and data meets the needs of health science popularization and enriches the functions of the health science popularization robot.

Description

Multifunctional health science popularization robot
Technical Field
The invention relates to the technical field of robots, in particular to a multifunctional health science popularization robot.
Background
At present, China's medical management has gradually shifted toward health management and has made good progress, but the problem of medical science popularization for left-behind children in rural areas remains non-negligible. China has more than 60 million left-behind children in rural areas, where medical resources lag behind those of cities and local people lack necessary medical knowledge, so bringing medical knowledge into the countryside is very important. Government work guidance has made special institutional arrangements for caring for and protecting left-behind rural children, and an inter-ministerial joint conference system on the care and protection of left-behind rural children has been established, which shows how seriously the nation takes their situation. It is therefore necessary to attend to the healthy growth of left-behind children, to bring medical knowledge to them, and to provide them with a query function for the necessary medical knowledge.
At present, the most important application scenario for robots is in industry: some industrial activities are dangerous and harmful to the human body, and manual operation is not very safe, so substituting robots for people reduces dangerous accidents. Recently, robots have increasingly entered ordinary life, for example in education and agriculture. However, the application of robots to medical science popularization is still in its infancy, so a health science popularization robot needs to be developed, bringing robot technology into the medical field and providing science popularization and reference functions for medical knowledge to left-behind children in poor areas.
In the prior art, Chinese utility model patent CN208880724U, published on May 21, 2019, discloses an agricultural technology science popularization robot comprising: a body and a wheeled chassis; the wheeled chassis is connected to the lower surface of the body and drives the body to walk; a microprocessor, an audio input/output assembly, and a voice interaction assembly are arranged inside the body; the microprocessor is connected to the audio input/output assembly and the voice interaction assembly respectively; and agricultural science popularization knowledge is stored in the voice interaction assembly. That scheme has a simple component design and cannot provide deeper query and interaction functions.
Disclosure of Invention
The invention provides a multifunctional health science popularization robot to overcome the defects that the prior art contains no dedicated health science popularization robot and that robots cannot yet be used to popularize medical health knowledge.
The primary objective of the present invention is to solve the above technical problems. The technical solution of the present invention is as follows:
A multifunctional health science popularization robot, comprising: an external input module, an internal storage and database module, and a data interface module, the external input module being connected to the internal storage and database module through the data interface module. The external input module receives the user's interaction input; the internal storage and database module stores video resources, a K-V service database, and a face information database; and the data interface module recognizes the user input to obtain a classification result and transmits the search result to the output end.
In this scheme, the external input module comprises: a camera, an infrared module, a microphone, a touch acquisition module, an LED touch screen, a power switch, an external device interface, a power interface, a network port, and travelling wheels. The camera acquires face images and scene images; the infrared module performs distance measurement; the microphone collects voice input signals; the touch acquisition module performs fingerprint collection and fingerprint verification; the LED touch screen serves for the user's interactive input and query-result output; the external device interface connects external devices; the power switch controls on-off of the robot's power supply; the power interface supplies power to the robot; the network port provides the network connection; and the travelling wheels move the robot.
In this scheme, the video resources are preset health science popularization videos; the face recognition database is a dictionary storage structure that matches key information against the entered face information database; and the K-V service database is preset with problem categories and their corresponding solutions.
In this scheme, a user can access the video resources through the LED touch screen or the external device interface.
In this scheme, the external devices connected through the external device interface include: a keyboard and a mouse.
In this scheme, the external input module further comprises a wireless communication module.
In this scheme, the data interface module comprises a recognition input interface whose recognition method is determined by the user's input type: when the user searches by touch selection on the LED touch screen, the screen directly presents preset options to choose from; and when the user inputs text or voice, the input is classified by a preset machine learning model, and the corresponding result is looked up in the K-V service database and returned.
In this scheme, the user's input is mapped by the machine learning model to the matching Key value in the database, i.e. the category the user requires, and the corresponding result is then returned to the output end.
In this scheme, the machine learning model is constructed as follows: existing natural language corpora and a speech database are analyzed to extract primitive language features; a machine learning algorithm performs statistical modeling on the extracted features; and the resulting model is integrated on the robot's microprocessor.
In this scheme, the face database is trained on face image data by extracting LBP feature information from the face images, realizing the face recognition function.
Compared with the prior art, the technical scheme of the invention has the beneficial effects that:
The multifunctional health science popularization robot is built from the external input module, the internal storage and database module, and the data interface module; rich external interactive input combined with preset internal storage and data meets the needs of health science popularization and enriches the functions of the health science popularization robot.
Drawings
Fig. 1 is a schematic block diagram of a multifunctional health science popularization robot system according to the present invention.
Fig. 2 is a schematic diagram of an external input module of the multifunctional health science popularization robot.
Detailed Description
In order that the above objects, features and advantages of the present invention can be more clearly understood, a more particular description of the invention will be rendered by reference to the appended drawings. It should be noted that the embodiments and features of the embodiments of the present application may be combined with each other without conflict.
In the following description, numerous specific details are set forth in order to provide a thorough understanding of the present invention, however, the present invention may be practiced in other ways than those specifically described herein, and therefore the scope of the present invention is not limited by the specific embodiments disclosed below.
Example 1
As shown in fig. 1, the multifunctional health science popularization robot comprises: an external input module, an internal storage and database module, and a data interface module, the external input module being connected to the internal storage and database module through the data interface module. The external input module receives the user's interaction input; the internal storage and database module stores video resources, a K-V service database, and a face information database; and the data interface module recognizes the user input to obtain a classification result and transmits the search result to the output end.
As shown in fig. 2, in this solution the external input module comprises: a camera 1, an infrared module 2, a microphone 3, a touch acquisition module 4, an LED touch screen 5, travelling wheels 6, a network port 7, an external device interface 8, a power switch 9, and a power interface 10. The camera 1 acquires face images and scene images; the infrared module 2 performs distance measurement; the microphone 3 collects voice input signals; the touch acquisition module 4 performs fingerprint collection and fingerprint verification; the LED touch screen 5 serves for the user's interactive input and query-result output; the external device interface 8 connects external devices; the power switch 9 controls on-off of the robot's power supply; the power interface 10 supplies power to the robot; the network port 7 provides the network connection; and the travelling wheels 6 move the robot.
In a specific embodiment, the camera and the infrared module can identify from captured image information whether a scene is safe, assisting in supervising children. The external input module receives external input and generates output to the user: the LED touch screen serves both as a touch input carrier and as an output carrier; the external device interface, which may be a USB interface, connects external devices such as a keyboard and a mouse as additional input; the microphone serves for voice input; and the network port connects to an external network for network communication. The invention further provides a wireless communication module, which may be a WiFi module or a Bluetooth module, so the robot can update its video resources over the network. The power interface further comprises a charging device and a battery; the charging device can be connected to an external power supply to charge the battery, so the robot can work for a long time when disconnected from mains power. The travelling wheels allow the robot to move conveniently.
In a specific embodiment, the external input module further comprises a temperature sensor for detecting the ambient temperature and a humidity sensor for detecting the ambient humidity.
In a specific embodiment, the LED touch screen may adopt a capacitive touch module based on a touch detection IC (TTP223B). In the idle state the module outputs a low level and stays in a low-power mode; when a touch event occurs it switches its output to a high level and enters a high-power mode. The module is mounted outside the robot's glass shell and is easier to use for input than ordinary push buttons. This model has the advantage of low power consumption, with a supply voltage between 2 V and 5.5 V, and provides three interface pins: GND (ground), VCC (power), and SIG (signal output).
In a specific embodiment, the microphone sends the collected voice signal to a speech processing chip of model LD3320. The chip performs spectrum analysis on the received signal, extracts features, matches them one by one against an edited keyword list according to its internal algorithm, and sends the result with the highest matching rate back to the robot's microprocessor over a serial port. The speech recognition process thus converts what the user says into spectral features, matches those features one by one against the entries in the keyword list, and takes the best match as the recognition result. The keyword list is stored in the voice chip; the collected voice information is matched and compared against the pre-stored entries to obtain the optimal result, which is sent over the serial port to a Raspberry Pi. The Raspberry Pi, acting as the core controller, can then drive the display to present the necessary response messages.
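The one-by-one best-match selection performed inside the chip can be sketched in Python. This is a toy version under stated assumptions: it scores plain strings, whereas the real LD3320 matches spectral features, and the scoring function and keyword list below are illustrative, not the chip's actual algorithm:

```python
# Toy illustration of best-match keyword selection, as performed by a
# speech chip against its keyword list. Plain strings stand in for the
# spectral features the real chip would use; the keywords are examples.

def match_score(features, keyword):
    """Score a candidate keyword against the extracted features:
    here, the fraction of the keyword's characters present in the input."""
    if not keyword:
        return 0.0
    hits = sum(1 for ch in keyword if ch in features)
    return hits / len(keyword)

def best_match(features, keyword_list):
    """Return the keyword with the highest matching rate."""
    return max(keyword_list, key=lambda kw: match_score(features, kw))

keywords = ["fever", "abdominal pain", "headache"]
result = best_match("i have a feverr", keywords)  # noisy input still matches
```

Only the single best-matching entry is returned, mirroring how the chip reports one result over the serial port rather than a ranked list.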
In this scheme, the video resources are preset health science popularization videos; the face recognition database is a dictionary storage structure that matches key information against the entered face information database; and the K-V service database is preset with problem categories and their corresponding solutions.
In a specific embodiment, the health science popularization videos may be medical science popularization videos recorded by professionals or teams with a medical background; children can acquire medical knowledge by watching them. The K-V service database encapsulates each category of common knowledge as a search Key and imports the corresponding results into the database in advance. When the user inputs information in any of the supported modes, the intermediate machine learning model produces the category value, a query operation retrieves the result, and the result is returned to the output part. To support the function of assisting in supervising children, a face recognition model must be trained: face information is entered in advance, feature-processed, and stored in the internal trained model. The specific procedure is as follows: power on the robot; wait for the robot's IoT system platform to connect to the OceanConnect platform; select the face registration mode; continuously shoot 5 photos and store them on the SD card (the green light flashes for 50 ms during shooting and for 1000 ms afterwards); then complete data acquisition and feature extraction through the feature extraction module.
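The Key-Value query path described above can be sketched as follows. The categories, answers, and the keyword-scan classifier are illustrative placeholders; in the patent's design a trained machine learning model, not a substring scan, supplies the classification:

```python
# Minimal sketch of the K-V service database: problem categories are the
# Keys, preset answers are the Values. Categories and answers here are
# illustrative placeholders, not the patent's actual data.

kv_service_db = {
    "fever": "Measure body temperature; rest and drink water; see a doctor if it stays high.",
    "abdominal pain": "Avoid pressing the abdomen; seek medical help if the pain persists.",
}

def classify(user_input):
    """Stand-in for the machine learning model: map raw input to a Key.
    A trained classifier would replace this simple keyword scan."""
    for key in kv_service_db:
        if key in user_input.lower():
            return key
    return None

def query(user_input):
    """Full path: input -> classified Key -> Value returned to the output end."""
    key = classify(user_input)
    return kv_service_db.get(key, "Sorry, no preset answer for this question.")
```

Because answers are imported in advance, a query is a single dictionary lookup once the Key is known, which keeps the on-robot response fast.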
In the scheme, a user can access video resources through the LED touch screen or the external equipment interface.
In this scheme, the external input module further comprises a wireless communication module.
In this scheme, the data interface module comprises a recognition input interface whose recognition method is determined by the user's input type: when the user searches by touch selection on the LED touch screen, the screen directly presents preset options to choose from; and when the user inputs text or voice, the input is classified by a preset machine learning model, and the corresponding result is looked up in the K-V service database and returned.
In a specific embodiment, the data interface module converts different types of external input into Key classification results for the database; because the inputs are heterogeneous, a processing module must be developed for each input type. For example, common symptom options such as abdominal pain and fever are provided directly, and the processing module then performs a direct match to obtain the result.
In a specific implementation, data is first imported into the internal storage and database module, and a corresponding reading mode is then written for each input type. Videos can be played directly by clicking the video file position or through external device input. For the query function, the programming depends directly on the input mode. Touch-screen input is converted directly, by signal conversion, into the category of the question to be queried. Voice input is converted into a digital signal by the LD3320 chip; a machine learning model trained on medical data converts that input into a problem category, and the final result is obtained through the K-V service database. Text input needs no additional conversion module: it is passed through the interface into the classification model to obtain the problem category, and the final result is obtained through the K-V service database and returned.
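The per-input-type reading modes above can be sketched as a small dispatcher. All names, the button-to-category table, and the placeholder classifier are illustrative assumptions, not the patent's implementation:

```python
# Sketch of the data interface module's reading modes: touch input maps
# directly to a category by signal conversion; text and voice input pass
# through the classification model first. Names below are illustrative.

PRESET_OPTIONS = {1: "fever", 2: "abdominal pain"}  # touch-screen buttons

def classify_text(text):
    """Placeholder for the trained classification model."""
    for category in PRESET_OPTIONS.values():
        if category in text.lower():
            return category
    return "unknown"

def handle_input(kind, payload):
    """Convert heterogeneous input into the database Key (problem category)."""
    if kind == "touch":
        # Direct signal conversion: button id -> category, no model needed.
        return PRESET_OPTIONS[payload]
    if kind == "voice":
        # The LD3320 chip would first turn audio into text/keywords.
        return classify_text(payload)
    if kind == "text":
        # Text needs no extra conversion module before classification.
        return classify_text(payload)
    raise ValueError("unsupported input type: " + kind)
```

Keeping one entry point with a branch per input type matches the document's point that each heterogeneous input needs its own processing module while all paths converge on the same database Key.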
In this scheme, the user's input is mapped by the machine learning model to the matching Key value in the database, i.e. the category the user requires, and the corresponding result is then returned to the output end.
In this scheme, the machine learning model is constructed as follows: existing natural language corpora and a speech database are analyzed to extract primitive language features; a machine learning algorithm performs statistical modeling on the extracted features; and the resulting model is integrated on the robot's microprocessor.
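The two construction steps, feature extraction followed by statistical modeling, can be sketched with a toy corpus. The corpus, the bag-of-words features, and the frequency-overlap scoring are illustrative stand-ins for whatever corpus and algorithm the patent actually uses:

```python
# Toy sketch of the model-construction pipeline: extract primitive
# language features (a bag of words) from a labeled corpus, then build a
# simple statistical model (per-category word frequencies). The corpus
# is an illustrative placeholder, not the patent's data.

from collections import Counter

corpus = [
    ("my head is hot and i feel warm", "fever"),
    ("temperature is high tonight", "fever"),
    ("my stomach hurts badly", "abdominal pain"),
    ("pain in my belly after lunch", "abdominal pain"),
]

def extract_features(text):
    """Primitive language features: lowercase word counts."""
    return Counter(text.lower().split())

# Statistical modeling: accumulate word frequencies per category.
model = {}
for text, label in corpus:
    model.setdefault(label, Counter()).update(extract_features(text))

def predict(text):
    """Score each category by overlap with its word-frequency profile."""
    feats = extract_features(text)
    def score(label):
        return sum(model[label][w] * c for w, c in feats.items())
    return max(model, key=score)
```

The final `model` dictionary is small and static, which is what makes integrating it on the robot's microprocessor practical.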
In this scheme, the face database is trained on face image data by extracting LBP feature information from the face images, realizing the face recognition function.
It should be noted that LBP (Local Binary Pattern) features have the advantages of grayscale invariance and rotation invariance. LBP features are extracted from key regions of the face image, and several groups of key points are compared to obtain matching distances; the descriptor finally returns an integer. Face information is then judged according to the model fitted to this metric value.
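The basic LBP encoding can be shown on a single pixel. This minimal sketch implements the standard 3x3 LBP code (neighbour ordering is a convention chosen here) and illustrates the grayscale invariance the text mentions: shifting every pixel by a constant leaves the code unchanged, because only the comparison sign matters:

```python
# Basic LBP (Local Binary Pattern) descriptor: each pixel is encoded as
# an 8-bit integer by thresholding its 8 neighbours against the centre
# value. The clockwise-from-top-left ordering is one common convention.

def lbp_code(image, y, x):
    """8-bit LBP code for the interior pixel at (y, x); neighbours are
    read clockwise from the top-left corner, most significant bit first."""
    c = image[y][x]
    neighbours = [
        image[y-1][x-1], image[y-1][x], image[y-1][x+1],
        image[y][x+1],   image[y+1][x+1], image[y+1][x],
        image[y+1][x-1], image[y][x-1],
    ]
    code = 0
    for bit, value in enumerate(neighbours):
        if value >= c:              # sign of the comparison, not magnitude
            code |= 1 << (7 - bit)
    return code

img = [
    [10, 20, 30],
    [40, 25, 60],
    [70, 80, 90],
]
center_code = lbp_code(img, 1, 1)
```

A face region is then summarized by a histogram of such codes, and two faces are compared by a distance between their histograms, which matches the "matching distance" the description refers to.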
The specific face recognition flow of the invention is as follows: the robot is powered on, its IoT system platform (IoT platform for short) connects to the OceanConnect platform, and once the connection completes the robot's LED touch screen displays "status".
Face detection can then be performed through the camera, and face recognition is completed (the red light flashes for 1000 ms). The camera sends a message to the IoT platform indicating that the match succeeded. After the IoT platform receives the instruction, the LED touch screen displays "unlock", the screen's green LED indicator lights up (simulating a door lock), and the data is reported to the OceanConnect platform. The lock re-locks automatically 5 s after unlocking; the screen then displays "lock", the green LED indicator turns off, and the data is again reported to the OceanConnect platform. On receiving the data, the OceanConnect back end sends an instruction to the IoT platform, obtains the matching result to determine whether the child is within the supervision range, and returns information on whether the child is missing.
The same or similar reference numerals correspond to the same or similar parts;
the terms describing positional relationships in the drawings are for illustrative purposes only and are not to be construed as limiting the patent;
it should be understood that the above-described embodiments of the present invention are merely examples for clearly illustrating the present invention, and are not intended to limit the embodiments of the present invention. Other variations and modifications will be apparent to persons skilled in the art in light of the above description. And are neither required nor exhaustive of all embodiments. Any modification, equivalent replacement, and improvement made within the spirit and principle of the present invention should be included in the protection scope of the claims of the present invention.

Claims (10)

1. A multifunctional health science popularization robot, characterized by comprising: an external input module, an internal storage and database module, and a data interface module, the external input module being connected to the internal storage and database module through the data interface module; the external input module receives the user's interaction input; the internal storage and database module stores video resources, a K-V service database, and a face information database; and the data interface module recognizes the user input to obtain a classification result and transmits the search result to the output end.
2. The multifunctional health science popularization robot according to claim 1, wherein the external input module comprises: a camera, an infrared module, a microphone, a touch acquisition module, an LED touch screen, an external device interface, a power switch, a power interface, a network port, and travelling wheels; the camera acquires face images and scene images; the infrared module performs distance measurement; the microphone collects voice input signals; the touch acquisition module performs fingerprint collection and fingerprint verification; the LED touch screen serves for the user's interactive input and query-result output; the external device interface connects external devices; the power switch controls on-off of the robot's power supply; the power interface supplies power to the robot; the network port provides the network connection; and the travelling wheels move the robot.
3. The multifunctional health science popularization robot according to claim 1, wherein the video resources are preset health science popularization videos, the face recognition database is a dictionary storage structure that matches key information against the entered face information database, and the K-V service database is preset with problem categories and their corresponding solutions.
4. The multifunctional health science popularization robot according to claim 2, wherein a user may access the video resources through the LED touch screen or the external device interface.
5. The multifunctional health science popularization robot according to claim 2, wherein the external devices connected through the external device interface comprise: a keyboard and a mouse.
6. The multifunctional health science popularization robot according to claim 2, wherein the external input module further comprises a wireless communication module.
7. The multifunctional health science popularization robot according to claim 1, wherein the data interface module comprises a recognition input interface whose recognition method is determined by the user's input type: when the user searches by touch selection on the LED touch screen, the screen directly presents preset options to choose from; and when the user inputs text or voice, the input is classified by a preset machine learning model, and the corresponding result is looked up in the K-V service database and returned.
8. The multifunctional health science popularization robot according to claim 7, wherein the user's input is mapped by a machine learning model to the matching Key value of the database, i.e. the category the user requires, and the corresponding result is then returned to the output end.
9. The multifunctional health science popularization robot according to claim 1, wherein the machine learning model construction process comprises: analyzing existing natural language corpora and a speech database to extract primitive language features, performing statistical modeling on the extracted primitive language features with a machine learning algorithm, and then integrating the model on a microprocessor of the robot.
10. The multifunctional health science popularization robot according to claim 1, wherein the face database is trained on face image data by extracting LBP feature information of face images to realize a face recognition function.
CN202011013310.9A 2020-09-24 2020-09-24 Multifunctional health science popularization robot Pending CN112388643A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202011013310.9A CN112388643A (en) 2020-09-24 2020-09-24 Multifunctional health science popularization robot

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202011013310.9A CN112388643A (en) 2020-09-24 2020-09-24 Multifunctional health science popularization robot

Publications (1)

Publication Number Publication Date
CN112388643A (en) 2021-02-23

Family

ID=74595707

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202011013310.9A Pending CN112388643A (en) 2020-09-24 2020-09-24 Multifunctional health science popularization robot

Country Status (1)

Country Link
CN (1) CN112388643A (en)

Citations (8)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN101075291A (en) * 2006-05-18 2007-11-21 中国科学院自动化研究所 Efficient promoting exercising method for discriminating human face
CN102306234A (en) * 2011-06-30 2012-01-04 南京信息工程大学 Meteorological popular science robot system
CN103996155A (en) * 2014-04-16 2014-08-20 深圳市易特科信息技术有限公司 Intelligent interaction and psychological comfort robot service system
CN106339602A (en) * 2016-08-26 2017-01-18 丁腊春 Intelligent consulting robot
CN109087709A (en) * 2018-09-20 2018-12-25 广东工业大学 A kind of intelligent health protection robot
CN109243621A (en) * 2018-07-24 2019-01-18 上海常仁信息科技有限公司 A kind of health robot service system
WO2020112147A1 (en) * 2018-11-30 2020-06-04 National Cheng Kung University Method of an interactive health status assessment and system thereof
CN111694939A (en) * 2020-04-28 2020-09-22 平安科技(深圳)有限公司 Method, device and equipment for intelligently calling robot and storage medium


Similar Documents

Publication Publication Date Title
Fang et al. Deepasl: Enabling ubiquitous and non-intrusive word and sentence-level sign language translation
CN106875941B (en) Voice semantic recognition method of service robot
CN110457689B (en) Semantic processing method and related device
CN106933807A (en) Memorandum event-prompting method and system
Cheng et al. Cross-modality compensation convolutional neural networks for RGB-D action recognition
CN106255968A (en) Natural language picture search
CN106097835B (en) Deaf-mute communication intelligent auxiliary system and communication method
CN107515900B (en) Intelligent robot and event memo system and method thereof
CN108009490A (en) A kind of determination methods of chat robots system based on identification mood and the system
CN101674363A (en) Mobile equipment and talking method
CN207458054U (en) Intelligent translation machine
CN111126280B (en) Gesture recognition fusion-based aphasia patient auxiliary rehabilitation training system and method
CN105787442A (en) Visual interaction based wearable auxiliary system for people with visual impairment, and application method thereof
CN110110228A (en) Intelligent real-time professional literature recommendation method and system based on Internet and word bag
CN114255508A (en) OpenPose-based student posture detection analysis and efficiency evaluation method
CN106682449A (en) Skin disease map auxiliary diagnosis system based on cloud database
CN108960171B (en) Method for converting gesture recognition into identity recognition based on feature transfer learning
CN201397512Y (en) Embedded-type infrared human face image recognition device
CN109419609A (en) A kind of intelligent glasses of blind man navigation
CN202584048U (en) Smart mouse based on DSP image location and voice recognition
CN108305629B (en) Scene learning content acquisition method and device, learning equipment and storage medium
CN109686453A (en) Medical aid decision-making system based on big data
Islam et al. Improving real-time hand gesture recognition system for translation: Sensor development
CN111582039B (en) Sign language recognition and conversion system and method based on deep learning and big data
CN112388643A (en) Multifunctional health science popularization robot

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
RJ01 Rejection of invention patent application after publication

Application publication date: 20210223