CN111515957A - Desktop robot system and desktop robot for sitting posture detection and language chat

Info

Publication number
CN111515957A
Authority
CN
China
Prior art keywords
processing module
sitting posture
camera
microphone
posture detection
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202010401275.1A
Other languages
Chinese (zh)
Inventor
陈法明
林家辉
江浩东
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Jinan University
University of Jinan
Original Assignee
Jinan University
Application filed by Jinan University
Priority to CN202010401275.1A
Publication of CN111515957A
Legal status: Pending

Classifications

    • B PERFORMING OPERATIONS; TRANSPORTING
    • B25 HAND TOOLS; PORTABLE POWER-DRIVEN TOOLS; MANIPULATORS
    • B25J MANIPULATORS; CHAMBERS PROVIDED WITH MANIPULATION DEVICES
    • B25J11/00 Manipulators not otherwise provided for
    • B25J19/00 Accessories fitted to manipulators, e.g. for monitoring, for viewing; safety devices combined with or specially adapted for use in connection with manipulators
    • B25J9/00 Programme-controlled manipulators
    • B25J9/16 Programme controls
    • B25J9/1679 Programme controls characterised by the tasks executed
    • B25J9/1694 Programme controls characterised by use of sensors other than normal servo-feedback from position, speed or acceleration sensors, perception control, multi-sensor controlled systems, sensor fusion
    • B25J9/1697 Vision controlled systems

Abstract

The invention discloses a desktop robot system and a desktop robot for sitting posture detection and language chat. The system comprises a camera, a loudspeaker for playing voice, a microphone for voice input, and a processing module, with the camera, loudspeaker and microphone each connected to the processing module. The camera shoots the user's face and sends the captured face image to the processing module. The processing module identifies whether a face is present in the image, detects the sitting posture from the face position, generates a reminder signal when the posture is improper, generates a voice interaction signal from the voice input through the microphone, and plays both the reminder and the voice interaction signals through the loudspeaker. The desktop robot system realizes sitting posture detection and language interaction, and is low in cost, convenient to use and free of radiation.

Description

Desktop robot system and desktop robot for sitting posture detection and language chat
Technical Field
The invention relates to the technical field of sitting posture detection, in particular to a desktop robot system and a desktop robot for sitting posture detection and language chat.
Background
The myopia rate among teenagers has become a focus of public attention. For teenagers, good reading and study sitting posture is closely related to healthy vision development and normal bone development: in a good sitting posture the neck, chest and waist of the upper body are kept straight, so that the whole upper body remains upright at all times. Sitting posture detection equipment can therefore help teenagers form good study postures and protect both their vision and their skeletal development.
Existing sitting posture detection equipment is mainly based on infrared or ultrasonic detection technology, and some non-electronic correction devices, such as posture-correcting chairs, also exist. However, infrared and ultrasonic detection suffer from low recognition rates, radiation concerns and similar problems, while non-electronic correction devices are in direct contact with the human body, giving a poor user experience, possibly causing discomfort through pressure on the body, and being unable to deal with incorrect head postures. These devices are also single-function and relatively expensive.
Disclosure of Invention
The invention aims to overcome the defects of the prior art and provide a desktop robot system for sitting posture detection and language chat, which can realize sitting posture detection and language interaction, and has the advantages of low cost, convenient use and no radiation.
A second object of the present invention is to provide a desktop robot for sitting posture detection and language chat.
The first object of the invention is achieved by the following technical scheme: a desktop robot system for sitting posture detection and language chat, comprising a camera, a loudspeaker for playing voice, a microphone for voice input, and a processing module, wherein the camera, the loudspeaker and the microphone are each connected to the processing module;
the camera is used for shooting a human face and sending a shot human face image to the processing module;
the processing module is used for identifying whether a human face exists in the human face image, detecting a sitting posture according to the position of the human face, generating a corresponding reminding signal under the condition of improper sitting posture, generating a corresponding voice interaction signal according to voice input by the microphone, and playing the reminding signal and the voice interaction signal through the loudspeaker.
Preferably, the processing module is a Raspberry Pi 3B development board, the microphone is a Raspberry Pi USB driver-free microphone, and the camera, microphone and loudspeaker are connected respectively to the camera interface, microphone input interface and audio output interface on the Raspberry Pi 3B development board.
Furthermore, the camera is a Raspberry Pi camera module connected to the camera interface on the Raspberry Pi 3B development board through a flexible flat cable.
Preferably, the processing module uses OpenCV's haarcascade_frontalface_default.xml as the face detection classifier to identify whether a face exists in the face image and to detect the sitting posture from the face position.
Preferably, the processing module connects over the network to the API of a speech recognition open platform, to the API of the Turing robot open platform, and to the API of a speech synthesis open platform;
the speech recognition open platform converts the audio input from the microphone into text and returns it to the processing module; the processing module sends the text to the Turing robot open platform, which generates a corresponding reply text and returns it to the processing module; the processing module then sends the reply text to the speech synthesis open platform, which converts it into output audio and returns it to the processing module; finally, the processing module reads the output audio through the pygame module and plays it through the loudspeaker.
Preferably, the camera is placed in front of the user at an upward 45-degree angle to the desktop it stands on, so that the user's face lies in the upper-middle area of the camera frame.
Preferably, the system further comprises a photosensitive module connected to the processing module; the photosensitive module compares the detected ambient brightness with a preset brightness threshold and sends the comparison result to the processing module when the ambient brightness is below the threshold; on receiving the comparison result, the processing module sends an "environment too dark" reminder signal to the loudspeaker, which plays it.
Preferably, the system further comprises a position status indicator light, a recording indicator light, a sitting posture detection indicator light and a capacitive switch assembly, all connected to the processing module. The processing module controls the position status indicator light to turn on when the camera is correctly placed. The capacitive switch assembly comprises a sitting posture confirmation switch, a recording switch and a reset switch: the sitting posture confirmation switch triggers the processing module to perform sitting posture detection and controls the sitting posture detection indicator light; the recording switch turns the microphone and loudspeaker on or off, controls the recording indicator light, and triggers the processing module to perform voice interaction processing; the reset switch restarts the whole system.
The second object of the invention is achieved by the following technical scheme: a desktop robot for sitting posture detection and language chat, having a shell and the above desktop robot system for sitting posture detection and language chat, the system being mounted in the shell;
the shell is provided with a camera mounting hole, a photosensitive module mounting hole, a capacitance switch component mounting hole, a position state indicator lamp mounting hole, a sitting posture detection indicator lamp mounting hole, a recording indicator lamp mounting hole, a sound outlet and a microphone opening, wherein,
the processing module, the loudspeaker and the microphone are housed in the shell, with the loudspeaker mounted adjacent to the sound outlet and the microphone adjacent to the microphone opening; the photosensitive module is mounted in the photosensitive module mounting hole and exposed from the shell; the camera is mounted in the camera mounting hole and exposed from the shell; the position status indicator light, recording indicator light and sitting posture detection indicator light are each mounted in their respective mounting holes and exposed from the shell; and the capacitive switch assembly is mounted in the capacitive switch assembly mounting hole, with its switch buttons protruding from the surface of the shell.
Compared with the prior art, the invention has the following advantages and effects:
(1) The desktop robot system for sitting posture detection and language chat of the invention comprises a camera, a loudspeaker for playing voice, a microphone for voice input, and a processing module, with the camera, loudspeaker and microphone each connected to the processing module. The camera shoots the user's face and sends the captured face image to the processing module; the processing module identifies whether a face is present in the image, detects the sitting posture from the face position, generates a reminder signal when the posture is improper, generates a voice interaction signal from the voice input through the microphone, and plays both signals through the loudspeaker. The system realizes non-contact sitting posture detection and voice interaction, is also equipped with a photosensitive module to detect ambient brightness, and offers varied functions, no infrared radiation and a good user experience.
(2) The system adopts the processor on the Raspberry Pi 3B development board as the processing module; computation is fast and the overall cost of the system is low.
(3) The system uses OpenCV's haarcascade_frontalface_default.xml as the face detection classifier to identify whether a face exists in the face image and to detect the sitting posture from the face position, so the head position can be detected with high accuracy. Moreover, the processing module performs voice conversion by connecting to existing speech recognition, speech synthesis and intelligent chat robot open platforms, giving good real-time performance and adaptability in voice chat.
(4) The desktop robot can be placed on a desktop with the camera in front of the user at an upward 45-degree angle to the desktop, so that the user's face is in the upper-middle area of the camera frame. Sitting posture detection is started by pressing the sitting posture confirmation switch, and voice interaction by pressing the recording switch. The desktop robot is small, structurally simple and very convenient to use.
Drawings
Fig. 1 is a block diagram of a desktop robot system for sitting posture detection and language chat according to the present invention.
Fig. 2 is a schematic diagram of a desktop robot for sitting posture detection and language chat in accordance with the present invention.
Fig. 3 is a workflow diagram of the desktop robotic system of fig. 1.
Detailed Description
The present invention will be described in further detail with reference to examples and drawings, but the present invention is not limited thereto.
Examples
The embodiment discloses a desktop robot system for sitting posture detection and language chat, as shown in fig. 1, including: the device comprises a camera, a loudspeaker for playing voice, a microphone for voice input, a processing module, a photosensitive module, a position state indicator lamp, a recording indicator lamp, a sitting posture detection indicator lamp and a capacitance switch assembly. The camera, the loudspeaker, the microphone, the photosensitive module, the position state indicator lamp, the recording indicator lamp, the sitting posture detection indicator lamp and the capacitance switch assembly are respectively connected with the processing module.
The processing module is used for identifying whether a human face exists in the human face image, detecting a sitting posture according to the position of the human face, generating a corresponding reminding signal under the condition of improper sitting posture, generating a corresponding voice interaction signal according to voice input by the microphone, and playing the reminding signal and the voice interaction signal through the loudspeaker.
For the realization of sitting posture detection:
specifically, the camera is used for shooting a human face and sending a shot human face image to the processing module. The resolution of the face image collected by the camera is 640x 480.
The processing module uses OpenCV's built-in haarcascade_frontalface_default.xml as the face detection classifier to identify whether a face exists in the face image and to detect the sitting posture from the face position. OpenCV is a cross-platform, BSD-licensed computer vision library that runs on Linux, Windows, Android and Mac OS. It consists of a series of C functions and a small number of C++ classes, provides interfaces for languages such as Python, Ruby and MATLAB, and implements many general-purpose, powerful algorithms in image processing and computer vision.
Through the camera, the processing module observes and records the coordinates (x, y), width w and height h of the rectangular frame bounding the face in the image. Because the head is at its highest when the sitting posture is correct, and the size of the face in the frame is then relatively fixed, the processing module can reliably determine the sitting posture by comparing the face-detection return values (the real-time x, y, w and h parameters of faces()) under different sitting postures and finding the specific pairing between each posture and the parameter change. Combined with predefined rectangular-frame coordinate and width/height thresholds for the different postures, an accurate sitting posture detection result is obtained.
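The comparison of the faces() box parameters against predefined thresholds can be sketched as follows. This is a minimal illustration only: the threshold values and the head-tilt test are assumptions, since the patent does not disclose concrete calibration numbers.

```python
# Hypothetical calibration thresholds for a 640x480 frame; the patent only
# says that predefined coordinate and width/height thresholds are used.
REF_Y_MAX = 200   # face top edge below this row -> head too low
REF_W_MAX = 260   # face box wider than this -> user too close to the camera
FRAME_W = 640     # frame width used in the embodiment

def classify_posture(x, y, w, h):
    """Map one face bounding box (x, y, w, h) to a posture label."""
    if w > REF_W_MAX:
        return "too close"
    if y > REF_Y_MAX:
        return "head too low"
    cx = x + w / 2                 # horizontal centre of the face box
    if cx < FRAME_W * 0.35:
        return "head tilted left"
    if cx > FRAME_W * 0.65:
        return "head tilted right"
    return "correct"
```

In the real system the (x, y, w, h) tuples would come from OpenCV's `detectMultiScale()` on each camera frame; the label then selects which reminder the loudspeaker plays.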
In this embodiment, the processing module is a Raspberry Pi 3B development board. The board provides a Raspberry Pi GPIO circuit with a BCM2835 processor as its control centre, together with multiple IO ports, a camera interface, a microphone input interface, an audio output interface and a network interface.
The microphone is a Raspberry Pi USB driver-free microphone; the camera is a Raspberry Pi camera module connected to the camera interface on the Raspberry Pi 3B development board through a flexible flat cable. The microphone and the loudspeaker are connected respectively to the microphone input interface and the audio output interface on the board.
The Raspberry Pi 3B development board uses Python as its development language and can connect over the network to the API of the speech recognition open platform, the API of the Turing robot open platform, and the API of the speech synthesis open platform.
For the implementation of language chat:
when the audio is input through the microphone, the voice recognition open platform is used for converting the audio input by the microphone into a text accessed to the turing robot open platform and then returning the text to the development board, the development board sends the text to the turing robot open platform, the turing robot open platform is used for generating a corresponding callback text from the accessed text and then returning the callback text to the development board, the development board sends the callback text to the voice synthesis open platform, the voice synthesis open platform is used for converting the callback text into output audio and returning the output audio to the development board, the raspberry 3B development board reads the output audio through a pygame module of Python and plays the output audio outwards through a loudspeaker, and therefore the simple voice interaction function is achieved.
In this embodiment, the Baidu speech recognition platform and Baidu speech synthesis platform are used; the Baidu speech recognition platform requires a complete recording file to be uploaded, with a duration of no more than 60 s. The Turing robot open platform is an intelligent chat robot open platform from Beijing Guangnian Wuxian Technology, which opens custom knowledge-base functions based on NLP technology. The pygame module is a Python module with the advantage of fast response.
The VCC, GND and D0 ports of the photosensitive module are connected to three IO ports of the Raspberry Pi 3B development board. The photosensitive module compares the detected ambient brightness with a preset brightness threshold; when the brightness falls below the threshold, the level of the D0 port changes (in this embodiment D0 goes high). After the processing module detects that D0 has gone high, it sends an "environment too dark" reminder signal to the loudspeaker, which plays the reminder. In other embodiments, an LED driver module and an LED lamp triggered by the level change of the IO port on the development board may also be provided to achieve automatic lighting.
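The D0-level check can be sketched as follows; the pin number, the injected read function and the reminder wording are illustrative assumptions, and on real hardware the read would go through a GPIO library such as RPi.GPIO.

```python
D0_PIN = 17  # hypothetical BCM pin wired to the photosensitive module's D0 output

def check_ambient_light(read_pin, speak):
    """Play a 'too dark' reminder when D0 reads high; return True if a reminder
    was played. read_pin and speak are injected so the logic runs without
    hardware."""
    if read_pin(D0_PIN):  # high level = ambient brightness below the threshold
        speak("The environment is too dark, please turn on a light.")
        return True
    return False
```

In the embodiment this check would be polled in the main loop alongside the camera capture.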
The embodiment also discloses a desktop robot for sitting posture detection and language chat, and as shown in fig. 2, the desktop robot is provided with a shell 9 and the desktop robot system, and the desktop robot system is installed on the shell. The desktop robot can be placed on a desktop to realize sitting posture detection and voice interaction.
The shell is provided with a camera mounting hole, a photosensitive module mounting hole, a capacitance switch assembly mounting hole, a position state indicator lamp mounting hole, a sitting posture detection indicator lamp mounting hole, a recording indicator lamp mounting hole, a sound outlet 7 and a microphone opening 8.
The processing module, loudspeaker and microphone are housed in the shell. The loudspeaker is mounted adjacent to the sound outlet so that voice playback is clearer, and the microphone is mounted adjacent to the microphone opening so that voice input is clearer. In other embodiments, the microphone may be led directly out of the shell instead of being housed inside it, in which case the shell needs no microphone opening.
The photosensitive module 2 is mounted in the photosensitive module mounting hole and exposed from the shell; the camera 1 is mounted in the camera mounting hole and exposed from the shell. In use, the camera must be placed in front of the user at an upward 45-degree angle to the desktop (the camera angle is adjusted before installation), so that the user's face is in the upper-middle area of the camera frame.
The position state indicator lamp 3 is arranged in the position state indicator lamp mounting hole and exposed out of the shell, the recording indicator lamp 4 is arranged in the recording indicator lamp mounting hole and exposed out of the shell, and the sitting posture detection indicator lamp 5 is arranged in the sitting posture detection indicator lamp mounting hole and exposed out of the shell. When the camera is correctly placed, the processing module controls the position state indicating lamp to be lightened.
The capacitive switch assembly 6 is mounted in its mounting hole, with its switch buttons protruding from the surface of the shell. The assembly comprises a sitting posture confirmation switch X, a recording switch Y and a reset switch Z. The sitting posture confirmation switch triggers the processing module to perform sitting posture detection and controls the sitting posture detection indicator light. The recording switch turns the microphone and loudspeaker on or off, controls the recording indicator light, and triggers the processing module to perform voice interaction: when voice chat is desired, pressing the recording switch lights the recording indicator light and voice input can begin. When the system needs to be restarted, pressing the reset switch is all that is required.
As shown in fig. 3, the working process of the desktop robot/desktop robot system of the present embodiment is as follows:
(1) for sitting posture detection:
first, the system initializes: the camera frame resolution is set to 640×480, the faces() parameters are initialized, and the Raspberry Pi GPIO circuit is configured;
the method comprises the steps that a face image is collected through a camera, whether a face exists in the face image or not is judged through a processing module, whether the face is close to the middle position of a picture or not is judged, and when the face is judged and the distance between the face and a lower boundary exceeds 100 pixels, a position state indicator lamp is controlled to be on through the processing module;
the light intensity of the environment is detected through the photosensitive module, and an environment too dark prompt is sent to the loudspeaker through the processing module under the dark condition;
the sitting posture confirmation switch is pressed and the sitting posture detection indicator light turns on; the processing module checks whether the switch has been pressed and, if so, begins recording the face images captured by the camera and detecting the face position from the faces() parameters to judge the sitting posture; if the posture is improper (too close, head too low, head tilted right or left), the loudspeaker is triggered to play the corresponding reminder.
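The camera-placement check in the flow above (a face present, roughly centred, and more than 100 pixels above the lower boundary of the 640×480 frame) can be sketched as follows; only the 100-pixel rule is stated in the text, so the centering tolerance is an assumption.

```python
FRAME_W, FRAME_H = 640, 480  # frame resolution set at initialization

def position_ok(faces):
    """faces: list of (x, y, w, h) boxes from the face detector.
    Return True when some face is roughly centred and sits more than
    100 px above the lower boundary, i.e. the camera is placed correctly."""
    for x, y, w, h in faces:
        centered = abs((x + w / 2) - FRAME_W / 2) < FRAME_W / 4  # assumed tolerance
        above_bottom = (FRAME_H - (y + h)) > 100  # >100 px from lower boundary
        if centered and above_bottom:
            return True
    return False
```

The processing module would drive the position status indicator light from this boolean on each frame.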
(2) For voice chat:
firstly, initializing a system;
the recording switch is pressed and the recording indicator light turns on; the processing module judges whether the recording switch has been pressed and, if so, starts recording and generates an audio file, then calls the speech-to-text API of the Baidu speech recognition platform to generate a text;
the text is passed to the chat interface of the Turing robot platform, and the processing module obtains the reply text over the network;
the reply text is read aloud through the Baidu speech synthesis platform and finally played through the loudspeaker.
The above embodiments are preferred embodiments of the present invention, but the present invention is not limited to the above embodiments, and any other changes, modifications, substitutions, combinations, and simplifications which do not depart from the spirit and principle of the present invention should be construed as equivalents thereof, and all such changes, modifications, substitutions, combinations, and simplifications are intended to be included in the scope of the present invention.

Claims (9)

1. A desktop robot system for sitting posture detection and language chat, comprising a camera, a loudspeaker for playing voice, a microphone for voice input, and a processing module, wherein the camera, the loudspeaker and the microphone are each connected to the processing module;
the camera is used for shooting a human face and sending a shot human face image to the processing module;
the processing module is used for identifying whether a human face exists in the human face image, detecting a sitting posture according to the position of the human face, generating a corresponding reminding signal under the condition of improper sitting posture, generating a corresponding voice interaction signal according to voice input by the microphone, and playing the reminding signal and the voice interaction signal through the loudspeaker.
2. The desktop robot system for sitting posture detection and language chat of claim 1, wherein the processing module is a Raspberry Pi 3B development board, the microphone is a Raspberry Pi USB driver-free microphone, and the camera, the microphone and the loudspeaker are connected respectively to the camera interface, microphone input interface and audio output interface on the Raspberry Pi 3B development board.
3. The desktop robot system for sitting posture detection and language chat of claim 2, wherein the camera is a Raspberry Pi camera module connected to the camera interface on the Raspberry Pi 3B development board through a flexible flat cable.
4. The desktop robot system for sitting posture detection and language chat of claim 1, wherein the processing module uses OpenCV's haarcascade_frontalface_default.xml as the face detection classifier to identify whether a face exists in the face image and to detect the sitting posture from the face position.
5. The desktop robot system for sitting posture detection and language chat of claim 1, wherein the processing module connects over the network to the API of the speech recognition open platform, to the API of the Turing robot open platform, and to the API of the speech synthesis open platform;
the speech recognition open platform converts the audio input from the microphone into text and returns it to the processing module; the processing module sends the text to the Turing robot open platform, which generates a corresponding reply text and returns it to the processing module; the processing module then sends the reply text to the speech synthesis open platform, which converts it into output audio and returns it to the processing module; finally, the processing module reads the output audio through the pygame module and plays it through the loudspeaker.
6. The desktop robot system for sitting posture detection and language chat of claim 1, wherein the camera is placed in front of the user at an upward 45-degree angle to the desktop, and the user's face is located in the upper-middle area of the camera frame.
7. The desktop robot system for sitting posture detection and language chat of claim 1, further comprising a photosensitive module connected to the processing module; the ambient brightness detected by the photosensitive module is compared with a preset brightness threshold, and if the ambient brightness is below the threshold the comparison result is sent to the processing module; upon receiving the comparison result, the processing module sends an environment-too-dark reminder signal to the loudspeaker, which plays the reminder.
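Editor's illustration: the brightness reminder of claim 7 reduces to a threshold comparison. A minimal sketch — the threshold value, its units, and the reminder wording are assumptions (the patent specifies neither); a real photosensitive module on the Raspberry Pi would supply the ambient reading:

```python
BRIGHTNESS_THRESHOLD = 300  # assumed sensor units; not specified in the patent

def brightness_reminder(ambient, threshold=BRIGHTNESS_THRESHOLD):
    """Return the reminder the processing module would send to the
    loudspeaker when the environment is too dark, else None."""
    if ambient < threshold:
        return "The environment is too dark, please turn on a light."
    return None
```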
8. The desktop robot system for sitting posture detection and language chat of claim 1, further comprising a position status indicator light, a recording indicator light, a sitting posture detection indicator light and a capacitive switch assembly, all connected to the processing module; the processing module turns on the position status indicator light when the camera is correctly placed; the capacitive switch assembly comprises a sitting posture confirmation switch, a recording switch and a reset switch; the sitting posture confirmation switch triggers the processing module to perform sitting posture detection and controls the sitting posture detection indicator light; the recording switch turns the microphone and the loudspeaker on or off, controls the recording indicator light, and triggers the processing module to perform voice interaction processing; the reset switch restarts the whole system.
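Editor's illustration: each capacitive switch in claim 8 triggers a distinct behavior in the processing module. A minimal dispatch sketch — the handler and switch names are illustrative; the claim only fixes which switch maps to which behavior:

```python
def make_dispatcher(detect_posture, toggle_recording, reset_system):
    """Map each capacitive switch of claim 8 to its handler."""
    handlers = {
        "posture_confirm": detect_posture,   # run sitting-posture detection
        "record": toggle_recording,          # mic/speaker + recording light
        "reset": reset_system,               # restart the whole system
    }

    def on_press(switch):
        return handlers[switch]()            # invoke the mapped behavior

    return on_press
```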
9. A desktop robot for sitting posture detection and language chat, the desktop robot having a housing and a desktop robot system for sitting posture detection and language chat as claimed in any one of claims 1 to 8, the desktop robot system being mounted in the housing;
the housing is provided with a camera mounting hole, a photosensitive module mounting hole, a capacitive switch assembly mounting hole, a position status indicator light mounting hole, a sitting posture detection indicator light mounting hole, a recording indicator light mounting hole, a sound outlet and a microphone opening, wherein
the processing module, the loudspeaker and the microphone are accommodated inside the housing, the loudspeaker being mounted adjacent to the sound outlet and the microphone adjacent to the microphone opening; the photosensitive module is mounted in the photosensitive module mounting hole and exposed through the housing; the camera is mounted in the camera mounting hole and exposed through the housing; the position status indicator light, the recording indicator light and the sitting posture detection indicator light are each mounted in their respective mounting holes and exposed through the housing; and the capacitive switch assembly is mounted in the capacitive switch assembly mounting hole with its switch buttons protruding from the housing surface.
CN202010401275.1A 2020-05-13 2020-05-13 Desktop robot system and desktop robot for sitting posture detection and language chat Pending CN111515957A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202010401275.1A CN111515957A (en) 2020-05-13 2020-05-13 Desktop robot system and desktop robot for sitting posture detection and language chat

Publications (1)

Publication Number Publication Date
CN111515957A true CN111515957A (en) 2020-08-11

Family

ID=71907426

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202010401275.1A Pending CN111515957A (en) 2020-05-13 2020-05-13 Desktop robot system and desktop robot for sitting posture detection and language chat

Country Status (1)

Country Link
CN (1) CN111515957A (en)

Citations (8)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20170326726A1 (en) * 2014-10-02 2017-11-16 Brain Corporation Apparatus and methods for training path navigation by robots
CN108281143A (en) * 2018-02-24 2018-07-13 姚诗晴 A kind of student's daily schedule intelligence management and control robot based on machine vision and interactive voice
CN108416268A (en) * 2018-02-02 2018-08-17 华侨大学 A kind of action identification method based on dual robot Visual Communication
CN109948435A (en) * 2019-01-31 2019-06-28 深圳奥比中光科技有限公司 Sitting posture prompting method and device
CN109960154A (en) * 2019-02-21 2019-07-02 深圳市致善教育科技有限公司 A kind of operating method and its management system based on artificial intelligence study cabin
CN110275987A (en) * 2019-05-09 2019-09-24 威比网络科技(上海)有限公司 Intelligent tutoring consultant generation method, system, equipment and storage medium
CN110405794A (en) * 2019-08-28 2019-11-05 重庆科技学院 It is a kind of to embrace robot and its control method for children
CN110495711A (en) * 2019-09-12 2019-11-26 中南林业科技大学 A kind of intelligent desk and control method

Similar Documents

Publication Publication Date Title
CN110291489B (en) Computationally efficient human identification intelligent assistant computer
US5704836A (en) Motion-based command generation technology
RU2336560C2 (en) Dialogue control for electric device
US6393136B1 (en) Method and apparatus for determining eye contact
CN105009202B (en) It is divided into two-part speech recognition
EP1503368B1 (en) Head mounted multi-sensory audio input system
CN109120790B (en) Call control method and device, storage medium and wearable device
CN104094590A (en) Method and apparatus for unattended image capture
US20080289002A1 (en) Method and a System for Communication Between a User and a System
CN103765879A (en) Method to extend laser depth map range
JPH11327753A (en) Control method and program recording medium
CN111935573B (en) Audio enhancement method and device, storage medium and wearable device
CN110251070B (en) Eye health condition monitoring method and system
KR20180132989A (en) Attention-based rendering and fidelity
CN114779922A (en) Control method for teaching apparatus, control apparatus, teaching system, and storage medium
TW202008115A (en) Interaction method and device
CN110309693B (en) Multi-level state detection system and method
CN109036410A (en) Audio recognition method, device, storage medium and terminal
JP2024023681A (en) Mobile terminal, video image acquisition method, and video image acquisition program
US20230239800A1 (en) Voice Wake-Up Method, Electronic Device, Wearable Device, and System
WO2021196989A1 (en) Sleep state determination method and system, wearable device, and storage medium
CN111515957A (en) Desktop robot system and desktop robot for sitting posture detection and language chat
WO2020021861A1 (en) Information processing device, information processing system, information processing method, and information processing program
KR20090070325A (en) Emergency calling system and method based on multimodal information
JP2019220145A (en) Operation terminal, voice input method, and program

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
WD01 Invention patent application deemed withdrawn after publication

Application publication date: 20200811