CN111538344A - Intelligent wheelchair based on face key point motion following and control method thereof - Google Patents

Intelligent wheelchair based on face key point motion following and control method thereof

Info

Publication number
CN111538344A
CN111538344A (application number CN202010407075.7A)
Authority
CN
China
Prior art keywords
pin
wheelchair
main control
signal conversion
facial
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202010407075.7A
Other languages
Chinese (zh)
Inventor
向毅
陈维曦
张一帆
陆李阳
吴英
黄永林
向秋芳
周鑫
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Chongqing University of Science and Technology
Original Assignee
Chongqing University of Science and Technology
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Application filed by Chongqing University of Science and Technology
Priority to CN202010407075.7A
Publication of CN111538344A
Legal status: Pending

Classifications

    • G PHYSICS
    • G05 CONTROLLING; REGULATING
    • G05D SYSTEMS FOR CONTROLLING OR REGULATING NON-ELECTRIC VARIABLES
    • G05D1/00 Control of position, course or altitude of land, water, air, or space vehicles, e.g. automatic pilot
    • G05D1/02 Control of position or course in two dimensions
    • G05D1/021 Control of position or course in two dimensions specially adapted to land vehicles
    • G05D1/0276 Control of position or course in two dimensions specially adapted to land vehicles using signals provided by a source external to the vehicle
    • A HUMAN NECESSITIES
    • A61 MEDICAL OR VETERINARY SCIENCE; HYGIENE
    • A61G TRANSPORT, PERSONAL CONVEYANCES, OR ACCOMMODATION SPECIALLY ADAPTED FOR PATIENTS OR DISABLED PERSONS; OPERATING TABLES OR CHAIRS; CHAIRS FOR DENTISTRY; FUNERAL DEVICES
    • A61G5/00 Chairs or personal conveyances specially adapted for patients or disabled persons, e.g. wheelchairs
    • A61G5/04 Chairs or personal conveyances specially adapted for patients or disabled persons, e.g. wheelchairs motor-driven
    • A HUMAN NECESSITIES
    • A61 MEDICAL OR VETERINARY SCIENCE; HYGIENE
    • A61G TRANSPORT, PERSONAL CONVEYANCES, OR ACCOMMODATION SPECIALLY ADAPTED FOR PATIENTS OR DISABLED PERSONS; OPERATING TABLES OR CHAIRS; CHAIRS FOR DENTISTRY; FUNERAL DEVICES
    • A61G5/00 Chairs or personal conveyances specially adapted for patients or disabled persons, e.g. wheelchairs
    • A61G5/10 Parts, details or accessories
    • A61G5/1051 Arrangements for steering
    • G PHYSICS
    • G05 CONTROLLING; REGULATING
    • G05B CONTROL OR REGULATING SYSTEMS IN GENERAL; FUNCTIONAL ELEMENTS OF SUCH SYSTEMS; MONITORING OR TESTING ARRANGEMENTS FOR SUCH SYSTEMS OR ELEMENTS
    • G05B19/00 Programme-control systems
    • G05B19/02 Programme-control systems electric
    • G05B19/04 Programme control other than numerical control, i.e. in sequence controllers or logic controllers
    • G05B19/042 Programme control other than numerical control, i.e. in sequence controllers or logic controllers using digital processors
    • G05B19/0423 Input/output
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V40/00 Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V40/10 Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
    • G06V40/16 Human faces, e.g. facial parts, sketches or expressions
    • G06V40/168 Feature extraction; Face representation
    • G PHYSICS
    • G08 SIGNALLING
    • G08C TRANSMISSION SYSTEMS FOR MEASURED VALUES, CONTROL OR SIMILAR SIGNALS
    • G08C17/00 Arrangements for transmitting signals characterised by the use of a wireless electrical link
    • G08C17/02 Arrangements for transmitting signals characterised by the use of a wireless electrical link using a radio link
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04W WIRELESS COMMUNICATION NETWORKS
    • H04W4/00 Services specially adapted for wireless communication networks; Facilities therefor
    • H04W4/80 Services using short range communication, e.g. near-field communication [NFC], radio-frequency identification [RFID] or low energy communication
    • A HUMAN NECESSITIES
    • A61 MEDICAL OR VETERINARY SCIENCE; HYGIENE
    • A61G TRANSPORT, PERSONAL CONVEYANCES, OR ACCOMMODATION SPECIALLY ADAPTED FOR PATIENTS OR DISABLED PERSONS; OPERATING TABLES OR CHAIRS; CHAIRS FOR DENTISTRY; FUNERAL DEVICES
    • A61G2203/00 General characteristics of devices
    • A61G2203/10 General characteristics of devices characterised by specific control means, e.g. for adjustment or steering
    • A HUMAN NECESSITIES
    • A61 MEDICAL OR VETERINARY SCIENCE; HYGIENE
    • A61G TRANSPORT, PERSONAL CONVEYANCES, OR ACCOMMODATION SPECIALLY ADAPTED FOR PATIENTS OR DISABLED PERSONS; OPERATING TABLES OR CHAIRS; CHAIRS FOR DENTISTRY; FUNERAL DEVICES
    • A61G2203/00 General characteristics of devices
    • A61G2203/10 General characteristics of devices characterised by specific control means, e.g. for adjustment or steering
    • A61G2203/18 General characteristics of devices characterised by specific control means, e.g. for adjustment or steering by patient's head, eyes, facial muscles or voice

Abstract

The invention discloses an intelligent wheelchair based on facial key point motion following and a control method thereof. The intelligent wheelchair comprises a wheelchair body on which a facial key point control circuit is arranged; the facial key point control circuit comprises a wireless communication receiving unit, a main control unit, a left wheel motor driving unit and a right wheel motor driving unit. An intelligent terminal is arranged on the wheelchair body, a camera of the intelligent terminal acquires facial images of the wheelchair user, and a facial signal conversion APP is installed on the intelligent terminal. The facial signal conversion APP acquires a facial image of the wheelchair user through the camera of the intelligent terminal, extracts feature points from the facial image, converts the feature points into wheelchair control signals and sends them to the wheelchair. The main control unit on the wheelchair receives and processes the wheelchair control signals, converts them into instructions such as turn left, turn right, back up or stop, and controls the motors to execute the corresponding commands.

Description

Intelligent wheelchair based on face key point motion following and control method thereof
Technical Field
The invention relates to the technical field of wheelchairs, in particular to an intelligent wheelchair based on face key point motion following and a control method thereof.
Background
Wheelchairs currently on the market are mainly ordinary hand-pushed wheelchairs and electric wheelchairs operated with a joystick. Their drawback is that they are only suitable for users with a low degree of disability or with a family carer; severely disabled users, or users without an attendant, cannot use an ordinary wheelchair to move about autonomously.
Disclosure of Invention
In order to solve the problem that such disabled users cannot use an ordinary wheelchair to move about autonomously, the invention provides an intelligent wheelchair based on facial key point motion following and a control method thereof, in which the wheelchair is controlled autonomously through the facial actions of the disabled user, helping the user to live independently.
The technical scheme is as follows:
the utility model provides an intelligence wheelchair based on facial key point motion is followed, includes the wheelchair body, and its key lies in, be provided with facial key point control circuit on the wheelchair body, facial key point control circuit includes wireless communication receiving element, main control unit, left wheel motor drive unit and right wheel motor drive unit.
The signal input end of the wireless communication receiving unit serves as the signal receiving end of the wheelchair body to receive the wheelchair control signal, the output end of the wireless communication receiving unit is connected with the control signal input end of the main control unit, and the signal output end of the main control unit is respectively connected with the signal input end of the left wheel motor driving unit and the signal input end of the right wheel motor driving unit.
By adopting the design, the wheelchair receives the wheelchair control signal through the wireless communication receiving unit in the facial key point control circuit, and the wheelchair control signal is processed by the main control unit, so that the wheelchair can perform corresponding actions according to the wheelchair control signal.
Further, the main control unit adopts a main control chip U1 of model STM32F103VET6. The fiftieth pin, the seventy-fifth pin, the hundredth pin, the twenty-eighth pin and the eleventh pin of the main control chip U1 are interconnected and connected with a power supply VP; the fiftieth pin of the main control chip U1 is grounded through a twelfth capacitor C12, a thirteenth capacitor C13, a fourteenth capacitor C14, a fifteenth capacitor C15, a sixteenth capacitor C16 and a seventeenth capacitor C17 respectively; the sixth pin of the main control chip U1 is connected with the fiftieth pin of the main control chip U1 through a fifth resistor R5; the twenty-second pin of the main control chip U1 is grounded through a first magnetic bead FB1 and a nineteenth capacitor C19 in sequence, and the common terminal of the first magnetic bead FB1 and the nineteenth capacitor C19 is connected with the power supply VP; the twenty-second pin of the main control chip U1 serves as the output terminal of a power supply VA; the twentieth pin of the main control chip U1 is grounded; the twenty-first pin of the main control chip U1 is connected with the power supply VA through a sixth resistor R6 and is also grounded through an eleventh capacitor C11; and the forty-ninth pin, the seventy-fourth pin, the ninety-ninth pin, the twenty-seventh pin, the tenth pin and the nineteenth pin of the main control chip U1 are interconnected and grounded.
The eighth pin of the main control chip U1 is grounded through an eighth capacitor C8, and the ninth pin of the main control chip U1 is grounded through a ninth capacitor C9; the eighth pin of the main control chip U1 is further connected with the ninth pin of the main control chip U1 through a first crystal oscillator X1. The fifty-fifth pin of the main control chip U1 is connected through a third resistor R3 with the anode of a third diode D3, the cathode of the third diode D3 being grounded; the fifty-sixth pin of the main control chip U1 is connected through a fourth resistor R4 with the anode of a fourth diode D4, the cathode of the fourth diode D4 being grounded. The twelfth pin of the main control chip U1 is grounded through a tenth capacitor C10; the thirteenth pin of the main control chip U1 is grounded through an eighth resistor R8 and an eighteenth capacitor C18 in turn; the twelfth pin of the main control chip U1 is further connected, through a second crystal oscillator X2 and a seventh resistor R7 in sequence, with the common terminal of the eighth resistor R8 and the eighteenth capacitor C18. The fourteenth pin of the main control chip U1 is connected to the power supply VP through a tenth resistor R10 and is grounded through a twenty-second capacitor C22, and the ninety-fourth pin of the main control chip U1 is grounded through a ninth resistor R9.
The ninety-fifth pin, the ninety-sixth pin, the forty-seventh pin and the forty-eighth pin of the main control chip U1 serve as the signal output ends for the left wheel motor driving unit and are connected with the left wheel motor driving unit; the fifty-first pin, the fifty-second pin, the fifty-third pin and the fifty-fourth pin of the main control chip U1 serve as the signal output ends for the right wheel motor driving unit and are connected with the right wheel motor driving unit; and the sixty-eighth pin and the sixty-ninth pin of the main control chip U1 serve as the input ends for wireless communication control signals and are connected with the wireless communication receiving unit.
By adopting the scheme, the main control unit on the wheelchair receives and processes the wheelchair control signal, converts it into instructions such as turn left, turn right, back up or stop, and controls the motor to execute the corresponding control command.
Still further described, the facial keypoint control circuit further comprises an overspeed protection unit and a power supply unit, the power supply unit supplies power to the main control unit and the overspeed protection unit.
The power supply unit adopts a low-dropout voltage regulator U2 of model AMS1117-3.3. The input end of the voltage regulator U2 is grounded through a first capacitor C1 and a second capacitor C2 respectively, and is also connected with the source of a first MOS transistor Q1; the gate of the first MOS transistor Q1 is grounded through a second resistor R2; the drain of the first MOS transistor Q1 is connected with the power supply end of the battery through a first fuse, is grounded through a fifth capacitor C5, and is connected with the cathode of a first zener diode D1, the anode of the first zener diode D1 being grounded; the drain of the first MOS transistor Q1 also serves as the output end of a power supply VB. The output end of the voltage regulator U2 is grounded through a third capacitor C3 and a fourth capacitor C4 respectively, and is connected through a first resistor R1 with the anode of a second diode D2, the cathode of the second diode D2 being grounded; the output end of the voltage regulator U2 serves as the output end of the power supply VP.
And the twenty-second pin and the twenty-third pin of the main control chip U1 are used as overspeed protection signal input ends to be connected with the overspeed protection unit.
By adopting the scheme, the power supply unit transforms the voltage of an external power supply or a battery power supply into the voltage suitable for the main control unit and the overspeed protection unit.
A control method of an intelligent wheelchair based on facial key point motion following is characterized by the following. Preprocessing: an intelligent terminal with a camera is installed on the wheelchair so that the intelligent terminal can acquire facial images of the user; a facial signal conversion APP is installed on the intelligent terminal, the facial signal conversion APP comprises a model calling module, a control signal conversion module and a wireless communication module, and the wheelchair action corresponding to each type of facial image is set.
S1: facial signal conversion APP gathers current wheelchair user's facial image through intelligent terminal's camera.
S2: the face signal conversion APP obtains face feature points in the face image through a model calling module.
S3: the face signal conversion APP converts the face feature points into wheelchair control signals through the control signal conversion module.
S4: facial signal conversion APP passes through wireless communication module, sends the signal receiving terminal of wheelchair control signal through intelligent terminal to the wheelchair to control the wheelchair action.
The facial images are divided into four categories: a left-turn facial image, a right-turn facial image, a stop facial image and a back-up facial image, wherein the left-turn facial image corresponds to the wheelchair turning left, the right-turn facial image corresponds to the wheelchair turning right, the stop facial image corresponds to the wheelchair stopping, and the back-up facial image corresponds to the wheelchair backing up.
By adopting the scheme, the intelligent terminal acquires the facial image of the wheelchair user and converts the facial image into the wheelchair control signal according to the characteristic points in the facial image, so that the action intention of the wheelchair user is expressed, and then the wheelchair control signal is sent to the wheelchair through the wireless communication module to control the wheelchair to make left-turn, right-turn, stop, retreat and other actions.
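The following Python sketch (not part of the original disclosure) illustrates how steps S1 to S4 can fit together in one loop; frame_source, detector, convert_to_command and send_command are illustrative placeholders for the APP's camera, model calling, control signal conversion and wireless communication modules.

```python
# Illustrative S1-S4 loop; all four callables are assumed placeholders, not the
# actual modules of the facial signal conversion APP described in this patent.
def run_wheelchair_control(frame_source, detector, convert_to_command, send_command):
    for frame in frame_source:                     # S1: facial images from the camera
        keypoints = detector(frame)                # S2: facial feature points, or None
        if keypoints is None:
            continue                               # no face found in this frame
        command = convert_to_command(keypoints)    # S3: left turn / right turn / retreat / stop
        if command is not None:
            send_command(command)                  # S4: push to the wheelchair's signal receiving end
```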
Further, an Opencv library is arranged in the facial signal conversion APP, and the facial signal conversion APP calls a camera of the intelligent terminal through a camera API in the Opencv library to acquire a facial image of a current wheelchair user.
OpenCV stands for Open Source Computer Vision Library; it is an open-source API function library for computer vision.
A TensorFlow framework with multiple neural network layers is arranged in the model calling module of the facial signal conversion APP, and an MTCNN model is arranged in the TensorFlow framework.
And a rotation judgment threshold is set in a control signal conversion module of the face signal conversion APP.
The facial signal conversion APP mainly extracts the facial image; the TensorFlow framework used in the facial signal conversion APP is prior art and is not discussed here.
MTCNN stands for Multi-Task Cascaded Convolutional Neural Network; this technology is prior art and is not discussed here.
By adopting the scheme, the facial signal conversion APP acquires facial images through the camera of the intelligent terminal.
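A minimal sketch of acquiring the picture stream with OpenCV follows; the patent only states that the APP calls the terminal camera through a camera API in the OpenCV library, so the device index and generator structure here are assumptions.

```python
import cv2

def frame_source(device_index: int = 0):
    """Yield BGR frames from the intelligent terminal's camera as a picture stream."""
    cap = cv2.VideoCapture(device_index)   # open the camera via OpenCV's camera API
    try:
        while True:
            ok, frame = cap.read()          # one facial image per iteration
            if not ok:
                break
            yield frame
    finally:
        cap.release()
```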
Still further, in step S2, the step of obtaining facial feature points by the facial signal conversion APP via the model calling module is:
s21: a model calling module of the face signal conversion APP receives the collected face image in a picture stream mode, and the face image is subjected to image preprocessing in the MTCNN model.
S22: and the face signal conversion APP obtains face image feature points from the preprocessed face image through a model calling module in the MTCNN model and follows the face image feature points.
The model calling module of the facial signal conversion APP receives the collected facial images in the form of a picture stream; the facial signal conversion APP reads the feature points in the updated facial image in real time and converts the wheelchair user's facial information into control signals.
By adopting the scheme, the face signal conversion APP extracts the feature points from the face image and tracks the face image action intention of the wheelchair user in real time.
Further describing, the image preprocessing of the facial image in the MTCNN model specifically includes: and constructing an image pyramid of the facial image in the MTCNN model, selecting the facial image according to a preset sampling condition, and taking the facial image as a preprocessed facial image.
The constructed image pyramid is a series of pictures arranged in a pyramid shape with gradually decreasing resolution; transforming the image to different scales yields facial images of a uniform specification, so that the image pyramid is suitable for faces of different sizes.
By adopting the scheme, the MTCNN model of the face signal conversion APP is used for picture preprocessing so as to adapt to face images of different sizes and accurately acquire feature point information.
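As an illustration of the preprocessing described above, the sketch below builds an image pyramid by repeatedly downscaling the facial image; the scaling factor 0.709 and the 12-pixel minimum size are values commonly used in MTCNN implementations, not figures taken from this patent.

```python
import cv2

def build_image_pyramid(image, min_size: int = 12, factor: float = 0.709):
    """Return (scale, resized_image) pairs with gradually reduced resolution."""
    pyramid = []
    h, w = image.shape[:2]
    scale = 1.0
    while min(h * scale, w * scale) >= min_size:
        resized = cv2.resize(image, (int(w * scale), int(h * scale)))
        pyramid.append((scale, resized))
        scale *= factor                     # next, lower-resolution pyramid level
    return pyramid
```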
Further, the obtaining, by the facial signal conversion APP in the MTCNN model, facial image feature points of a preprocessed facial image through the model calling module specifically includes:
s221: and the face signal conversion APP rapidly generates candidate windows from the preprocessed face images through P-Net of three cascade networks in the MTCNN model.
S222: and the face signal conversion APP carries out high-precision candidate window filtering selection on the fast generated candidate windows through the R-Net of three cascade networks in the MTCNN model.
S223: and the face signal conversion APP outputs final face frames and face feature points through O-Net of three cascade networks in the MTCNN model according to the high-precision candidate windows, wherein the face feature points at least comprise left eyes, right eyes, noses, left mouth angles and right mouth angles.
P-Net is the Proposal Network; this technique is generally used to quickly generate face candidate windows.
The R-Net is a Refine Network, and the technology is usually used for further selecting and adjusting a face candidate window generated by the P-Net, so that the technical effects of high-precision filtering and face region optimization are achieved.
The O-Net is an Output Network, and the technology is generally used for further filtering and optimizing the R-Net filtered face candidate window and outputting facial feature points.
All three technologies mentioned above are prior art and will not be discussed here.
By adopting the scheme, the facial signal conversion APP accurately extracts the characteristic points in the facial image through P-Net, R-Net and O-Net in the MTCNN model, the processing speed is high, and the characteristic points are calibrated more accurately.
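A hedged sketch of obtaining the five facial feature points with an off-the-shelf MTCNN implementation (the `mtcnn` package on PyPI, which runs the P-Net/R-Net/O-Net cascade on TensorFlow) is shown below; the patent describes the APP's own model calling module, so this package is an assumed, functionally similar substitute.

```python
import cv2
from mtcnn import MTCNN

_detector = MTCNN()   # loads the P-Net, R-Net and O-Net cascade

def detect_keypoints(frame_bgr):
    """Return the five keypoints (left_eye, right_eye, nose, mouth_left, mouth_right) or None."""
    rgb = cv2.cvtColor(frame_bgr, cv2.COLOR_BGR2RGB)    # the mtcnn package expects RGB input
    faces = _detector.detect_faces(rgb)
    if not faces:
        return None
    best = max(faces, key=lambda f: f["confidence"])    # keep the highest-confidence face
    return best["keypoints"]                            # dict of (x, y) pixel coordinates
```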
Further, the facial signal conversion APP converts the facial feature points into wheelchair control signals through the control signal conversion module, as follows:
the facial signal conversion APP obtains the distance D1 between the left eye and the right eye, the distance D2 between the left eye and the nose, the distance D3 between the right eye and the nose, the distance D4 from the left mouth corner to the nose and the distance D5 from the right mouth corner to the nose through the control signal conversion module.
The rotation determination threshold value is set with a left rotation determination value of X1, a right rotation determination value of X2, a reverse determination value of X3, and a stop determination value of X4.
When the control signal conversion module detects that the distance D1 between the left eye and the right eye is within X1 times of the distance D2 between the left eye and the nose, the control signal conversion module outputs a wheelchair left-turning control signal.
When the control signal conversion module detects that the distance D1 between the left eye and the right eye is within X2 times of the distance D3 between the right eye and the nose, the control signal conversion module outputs a wheelchair right-turning control signal.
When the control signal conversion module detects that the larger of the distance D2 between the left eye and the nose and the distance D3 between the right eye and the nose exceeds X3 times the larger of the distance D4 from the left mouth corner to the nose and the distance D5 from the right mouth corner to the nose, the control signal conversion module outputs a wheelchair retreat control signal.
When the control signal conversion module detects that the larger of the distance D4 from the left mouth corner to the nose and the distance D5 from the right mouth corner to the nose exceeds X4 times the larger of the distance D2 between the left eye and the nose and the distance D3 between the right eye and the nose, the control signal conversion module outputs a wheelchair stop control signal.
By adopting the scheme, the distance relations between the feature points of the facial image, established from a large amount of actual test data, are used to output the corresponding wheelchair control signal and thereby control the wheelchair.
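The sketch below translates the four rules above into code. It is only an interpretation: the keypoint names follow the `mtcnn` package used in the earlier sketch, the direction of the "within"/"exceeds" comparisons and the order in which the rules are checked are assumptions, and the default thresholds are the embodiment values given later (X1 = 1.2, X2 = 1.1, X3 = 2.7, X4 = 2.7).

```python
from math import dist   # Euclidean distance, Python 3.8+

def convert_to_command(kp, x1=1.2, x2=1.1, x3=2.7, x4=2.7):
    d1 = dist(kp["left_eye"], kp["right_eye"])    # D1: left eye - right eye
    d2 = dist(kp["left_eye"], kp["nose"])         # D2: left eye - nose
    d3 = dist(kp["right_eye"], kp["nose"])        # D3: right eye - nose
    d4 = dist(kp["mouth_left"], kp["nose"])       # D4: left mouth corner - nose
    d5 = dist(kp["mouth_right"], kp["nose"])      # D5: right mouth corner - nose

    if max(d4, d5) > x4 * max(d2, d3):
        return "STOP"     # stop facial image
    if max(d2, d3) > x3 * max(d4, d5):
        return "BACK"     # retreat facial image
    if d1 <= x1 * d2:
        return "LEFT"     # left-turn facial image ("D1 within X1 times D2")
    if d1 <= x2 * d3:
        return "RIGHT"    # right-turn facial image ("D1 within X2 times D3")
    return None           # no recognised command in this frame
```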
Further, the wireless communication module adopts Bluetooth 5.0 communication technology, and the Bluetooth of the intelligent terminal is connected with the signal receiving end of the wheelchair to transmit the wheelchair control signal to the signal receiving end of the wheelchair.
By adopting the scheme, Bluetooth 5.0 communication technology, which is prior art, avoids interference from a wired communication line with the wheelchair control signal; the wireless communication technology is mature, has good transmission performance and low power consumption, and is easy to carry.
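A hedged sketch of pushing a command over a Bluetooth serial (RFCOMM) link with the PyBluez library follows; on an Android terminal the APP would use the platform's own Bluetooth API instead, and the address and channel below are placeholders, not values from the patent.

```python
import bluetooth   # PyBluez

WHEELCHAIR_ADDR = "00:11:22:33:44:55"   # placeholder MAC address of the wheelchair's receiver
RFCOMM_CHANNEL = 1                      # placeholder RFCOMM channel

def send_command(command: str):
    """Send one command string (e.g. "LEFT", "RIGHT", "BACK", "STOP") to the wheelchair."""
    sock = bluetooth.BluetoothSocket(bluetooth.RFCOMM)
    try:
        sock.connect((WHEELCHAIR_ADDR, RFCOMM_CHANNEL))
        sock.send((command + "\n").encode("ascii"))
    finally:
        sock.close()
```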
The invention has the beneficial effects that: the facial signal conversion APP acquires a facial image of the wheelchair user through the camera on the intelligent terminal, extracts feature points from the facial image, converts the feature points into wheelchair control signals and sends them to the wheelchair; the main control unit on the wheelchair receives and processes the wheelchair control signals, converts them into instructions such as turn left, turn right, back up or stop, and controls the motors to execute the corresponding control command.
Drawings
FIG. 1 is a flow chart of a control method of an intelligent wheelchair based on facial keypoint motion following according to the present invention;
FIG. 2 is a circuit block diagram of the facial keypoint control circuit of the present invention;
FIG. 3 is a circuit diagram of the main control unit of the present invention;
FIG. 4 is a circuit diagram of the power supply unit of the present invention;
FIG. 5 is a flowchart illustrating the step S2 in the present invention;
FIG. 6 is a flowchart of obtaining facial image feature points in an MTCNN model according to the present invention;
FIG. 7 is a diagram illustrating a process of obtaining feature points of a face image from a face image according to the present invention;
FIG. 8 is a schematic diagram of the conversion of a facial image into a wheelchair left turn control signal in accordance with the present invention.
Detailed Description
In order to make the objects, technical solutions and advantages of the present invention more apparent, the present invention is further described in detail below with reference to the accompanying drawings and the detailed description.
It should be understood that the detailed description and specific examples, while indicating preferred embodiments of the invention, are intended for purposes of illustration only and are not intended to limit the scope of the invention.
In this embodiment:
As can be seen from figure 2, an intelligent wheelchair based on facial key point motion following comprises a wheelchair body; a facial key point control circuit is arranged on the wheelchair body, and the facial key point control circuit comprises a wireless communication receiving unit, a main control unit, a left wheel motor driving unit and a right wheel motor driving unit.
The signal input end of the wireless communication receiving unit is used as the signal receiving end of the wheelchair body to receive wheelchair control signals, the output end of the wireless communication receiving unit is connected with the control signal input end of the main control unit, and the signal output end of the main control unit is respectively connected with the signal input end of the left wheel motor driving unit and the signal input end of the right wheel motor driving unit.
As can be seen from fig. 1 and fig. 2, the main control unit adopts a main control chip U1 of model STM32F103VET6. The fiftieth pin, the seventy-fifth pin, the hundredth pin, the twenty-eighth pin and the eleventh pin of the main control chip U1 are interconnected and connected with the power supply VP; the fiftieth pin of the main control chip U1 is grounded through a twelfth capacitor C12, a thirteenth capacitor C13, a fourteenth capacitor C14, a fifteenth capacitor C15, a sixteenth capacitor C16 and a seventeenth capacitor C17 respectively; the sixth pin of the main control chip U1 is connected with the fiftieth pin of the main control chip U1 through a fifth resistor R5; the twenty-second pin of the main control chip U1 is grounded through a first magnetic bead FB1 and a nineteenth capacitor C19 in turn, and the common terminal of the first magnetic bead FB1 and the nineteenth capacitor C19 is connected with the power supply VP; the twenty-second pin of the main control chip U1 serves as the output terminal of the power supply VA; the twentieth pin of the main control chip U1 is grounded; the twenty-first pin of the main control chip U1 is connected with the power supply VA through a sixth resistor R6 and is also grounded through an eleventh capacitor C11; and the forty-ninth pin, the seventy-fourth pin, the ninety-ninth pin, the twenty-seventh pin, the tenth pin and the nineteenth pin of the main control chip U1 are interconnected and grounded.
The eighth pin of the main control chip U1 is grounded through an eighth capacitor C8, and the ninth pin of the main control chip U1 is grounded through a ninth capacitor C9; the eighth pin of the main control chip U1 is further connected with the ninth pin of the main control chip U1 through a first crystal oscillator X1. The fifty-fifth pin of the main control chip U1 is connected through a third resistor R3 with the anode of the third diode D3, the cathode of the third diode D3 being grounded; the fifty-sixth pin of the main control chip U1 is connected through a fourth resistor R4 with the anode of the fourth diode D4, the cathode of the fourth diode D4 being grounded. The twelfth pin of the main control chip U1 is grounded through a tenth capacitor C10; the thirteenth pin of the main control chip U1 is grounded through an eighth resistor R8 and an eighteenth capacitor C18 in turn; the twelfth pin of the main control chip U1 is further connected, through a second crystal oscillator X2 and a seventh resistor R7 in sequence, with the common terminal of the eighth resistor R8 and the eighteenth capacitor C18. The fourteenth pin of the main control chip U1 is connected to the power supply VP through a tenth resistor R10 and is grounded through a twenty-second capacitor C22, and the ninety-fourth pin of the main control chip U1 is grounded through a ninth resistor R9.
The ninety-fifth pin, the ninety-sixth pin, the forty-seventh pin and the forty-eighth pin of the main control chip U1 serve as the signal output ends for the left wheel motor driving unit and are connected with the left wheel motor driving unit; the fifty-first pin, the fifty-second pin, the fifty-third pin and the fifty-fourth pin of the main control chip U1 serve as the signal output ends for the right wheel motor driving unit and are connected with the right wheel motor driving unit; and the sixty-eighth pin and the sixty-ninth pin of the main control chip U1 serve as the wireless communication control signal input ends and are connected with the wireless communication receiving unit.
As can be seen from fig. 1 and 2, the facial keypoint control circuit further comprises an overspeed protection unit and a power supply unit, the power supply unit supplies power to the main control unit and the overspeed protection unit.
The power supply unit adopts a low-dropout voltage regulator U2 of model AMS1117-3.3. The input end of the voltage regulator U2 is grounded through a first capacitor C1 and a second capacitor C2 respectively, and is also connected with the source of a first MOS transistor Q1; the gate of the first MOS transistor Q1 is grounded through a second resistor R2; the drain of the first MOS transistor Q1 is connected with the power supply end of the battery through a first fuse, is grounded through a fifth capacitor C5, and is connected with the cathode of a first zener diode D1, the anode of the first zener diode D1 being grounded; the drain of the first MOS transistor Q1 also serves as the output end of the power supply VB. The output end of the voltage regulator U2 is grounded through a third capacitor C3 and a fourth capacitor C4 respectively, and is connected through a first resistor R1 with the anode of a second diode D2, the cathode of the second diode D2 being grounded; the output end of the voltage regulator U2 serves as the output end of the power supply VP.
And the twenty-second pin and the twenty-third pin of the main control chip U1 are used as an overspeed protection signal input end to be connected with an overspeed protection unit.
For the safety of disabled users, the wheelchair can additionally be provided with an emergency stop unit and an alarm unit. A speed sensor is arranged in the emergency stop unit to detect the running speed of the wheelchair, and a speed threshold is set in the main control unit; when the running speed of the wheelchair exceeds the speed threshold, the main control unit sends an emergency stop signal and controls the wheelchair to slow down and stop.
The alarm unit realizes two alarm functions. The first is an emergency alarm: a facial feature point relation for a specific emergency is set in the mobile phone's facial signal conversion APP; when the disabled user needs emergency help, he or she performs this specific facial action, the facial signal conversion APP reads the request, and the mobile phone automatically calls for help from a nearby hospital. The second is a help alarm: when the wheelchair breaks down or tips over, a facial feature point relation for a specific help request, likewise set in the mobile phone's facial signal conversion APP, can be triggered; the disabled user performs this specific facial action, the facial signal conversion APP reads the request, and the mobile phone automatically sends a help message to the family.
As can be seen from fig. 1 to 6, in this embodiment of the invention the intelligent terminal is a mobile phone, and the preprocessing is performed as follows: a mobile phone with a camera is installed on the wheelchair so that the mobile phone can acquire facial images of the user; the facial signal conversion APP is installed on the mobile phone, the facial signal conversion APP comprises a model calling module, a control signal conversion module and a wireless communication module, and the wheelchair action corresponding to each type of facial image is set. The facial images are divided into four categories: a left-turn facial image, a right-turn facial image, a stop facial image and a back-up facial image, wherein the left-turn facial image corresponds to the wheelchair turning left, the right-turn facial image corresponds to the wheelchair turning right, the stop facial image corresponds to the wheelchair stopping, and the back-up facial image corresponds to the wheelchair backing up.
The facial signal conversion APP in the present embodiment was developed by the inventors; the models and frameworks involved, namely the OpenCV library, the TensorFlow framework and the MTCNN model, are called via the internet.
S1: the face signal conversion APP collects the face image of the current wheelchair user through a camera of the mobile phone.
S2: the face signal conversion APP obtains face feature points in the face image through a model calling module.
S3: the face signal conversion APP converts the face feature points into wheelchair control signals through the control signal conversion module.
S4: facial signal conversion APP passes through wireless communication module, sends the signal receiving terminal of wheelchair control signal warp cell-phone to the wheelchair to control the wheelchair and carry out corresponding action.
As can be seen from fig. 1, an Opencv library is arranged in the facial signal conversion APP, and the facial signal conversion APP calls a camera of a mobile phone through a camera API in the Opencv library to acquire a facial image of a current wheelchair user.
A TensorFlow framework with a multi-neural network layer is arranged in a model calling module of the face signal conversion APP, and an MTCNN model is arranged in the TensorFlow framework.
A rotation determination threshold is set in a control signal conversion module of a face signal conversion APP.
As can be seen from fig. 5, the step of obtaining the facial feature points by the facial signal conversion APP via the model calling module in step S2 is as follows:
s21: a model calling module of the face signal conversion APP receives the collected face image in a picture stream mode, and the face image is subjected to image preprocessing in the MTCNN model.
S22: and the face signal conversion APP obtains face image feature points from the preprocessed face image through a model calling module in the MTCNN model and follows the face image feature points.
As can be seen from fig. 7, the image preprocessing of the face image in the MTCNN model specifically includes: and constructing an image pyramid of the facial image in the MTCNN model, selecting the facial image according to a preset sampling condition, and taking the facial image as a preprocessed facial image.
As can be seen from fig. 6 and 7, the facial signal conversion APP obtains facial image feature points of a preprocessed facial image through a model calling module in the MTCNN model specifically as follows:
s221: and the face signal conversion APP rapidly generates candidate windows from the preprocessed face images through P-Net of three cascade networks in the MTCNN model.
S222: and the face signal conversion APP carries out high-precision candidate window filtering selection on the fast generated candidate windows through the R-Net of three cascade networks in the MTCNN model.
S223: and the face signal conversion APP outputs a final face frame and face feature points through O-Net of three cascade networks in the MTCNN model according to the high-precision candidate window, wherein the face feature points comprise a left eye, a right eye, a nose, a left mouth angle and a right mouth angle, and the total amount of five feature points.
As can be seen from fig. 7 and 8, the facial signal conversion APP converts the facial feature points into wheelchair control signals through the control signal conversion module as follows:
in the present embodiment, the left-turn determination value X1 is set to 1.2, the right-turn determination value X2 is set to 1.1, the back determination value 3 is set to 2.7, and the stop determination value X4 is set to 2.7, based on a large amount of data and proof of practice.
The facial signal conversion APP obtains a distance D1 between the left eye and the right eye, a distance D2 between the left eye and the nose, a distance D3 between the right eye and the nose, a distance D4 between the left mouth corner and the nose and a distance D5 between the right mouth corner and the nose through the control signal conversion module.
The rotation determination threshold value is set with the left rotation determination value X1, the right rotation determination value X2, the reverse determination value X3, and the stop determination value X4.
When the control signal conversion module detects that the distance D1 between the left eye and the right eye is within 1.2 times of the distance D2 between the left eye and the nose, the control signal conversion module outputs a wheelchair left-turning control signal.
When the control signal conversion module detects that the distance D1 between the left eye and the right eye is within 1.1 times of the distance D3 between the right eye and the nose, the control signal conversion module outputs a wheelchair right-turning control signal.
When the control signal conversion module detects that the larger of the distance D2 between the left eye and the nose and the distance D3 between the right eye and the nose exceeds 2.7 times the larger of the distance D4 from the left mouth corner to the nose and the distance D5 from the right mouth corner to the nose, the control signal conversion module outputs a wheelchair retreat control signal.
When the control signal conversion module detects that the larger of the distance D4 from the left mouth corner to the nose and the distance D5 from the right mouth corner to the nose exceeds 2.7 times the larger of the distance D2 between the left eye and the nose and the distance D3 between the right eye and the nose, the control signal conversion module outputs a wheelchair stop control signal.
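Tying the illustrative sketches above together with the embodiment's thresholds (X1 = 1.2, X2 = 1.1, X3 = 2.7, X4 = 2.7) gives the following usage example; it reuses the hypothetical frame_source, detect_keypoints, convert_to_command and send_command helpers sketched earlier and is not part of the original disclosure.

```python
# Run the full camera -> keypoints -> control signal -> Bluetooth pipeline.
for frame in frame_source():
    kp = detect_keypoints(frame)
    if kp is None:
        continue
    cmd = convert_to_command(kp, x1=1.2, x2=1.1, x3=2.7, x4=2.7)
    if cmd is not None:
        send_command(cmd)
```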
As can be seen from fig. 2, the wireless communication module adopts Bluetooth 5.0 communication technology, and the Bluetooth of the mobile phone is connected with the signal receiving end of the wheelchair to transmit the wheelchair control signal to the signal receiving end of the wheelchair.
The working principle of the invention is as follows:
the face signal conversion APP obtains face images of the wheelchair driver and the passenger through a mobile phone or a flat camera.
The facial signal conversion APP performs image preprocessing on the acquired facial image with the MTCNN model in the TensorFlow framework: a series of pictures arranged in a pyramid shape with gradually decreasing resolution is constructed, and the facial image is selected according to a preset sampling condition.
The facial signal conversion APP rapidly generates candidate windows from the preprocessed facial image through P-Net of the three cascaded networks in the MTCNN model; R-Net of the three cascaded networks performs high-precision filtering and selection on the rapidly generated candidate windows; and O-Net of the three cascaded networks outputs the final face frame and facial feature points according to the high-precision candidate windows, wherein the facial feature points comprise five feature points: the left eye, right eye, nose, left mouth corner and right mouth corner.
The face signal conversion APP determines a corresponding wheelchair control signal according to the distance relationship among five feature points of the face image, wherein the signal comprises: a wheelchair left turn control signal, a wheelchair right turn control signal, a wheelchair retreat control signal, and a wheelchair stop control signal.
The face signal conversion APP sends the wheelchair control signal to the wireless communication receiving unit of the wheelchair through the Bluetooth of the mobile phone, and the main control unit of the wheelchair receives the wheelchair control signal and controls the wheelchair to execute corresponding actions.

Claims (9)

1. A control method of an intelligent wheelchair based on face key point motion following is characterized in that,
pretreatment: installing an intelligent terminal with a camera on the wheelchair, so that the intelligent terminal can acquire facial images of a user; the intelligent terminal is provided with a face signal conversion APP, and the face signal conversion APP comprises a model calling module, a control signal conversion module and a wireless communication module; setting the wheelchair action corresponding to each type of facial image;
s1: the face signal conversion APP acquires a face image of a current wheelchair user through a camera of the intelligent terminal;
s2: the face signal conversion APP obtains face feature points in the face image through a model calling module;
s3: the face signal conversion APP converts the face feature points into wheelchair control signals through a control signal conversion module;
s4: facial signal conversion APP passes through wireless communication module, sends the signal receiving terminal of wheelchair control signal through intelligent terminal to the wheelchair to control the wheelchair action.
2. The method for controlling the intelligent wheelchair based on the movement following of the facial key points as claimed in claim 1, wherein an Opencv library is arranged in the facial signal conversion APP, the facial signal conversion APP calls a camera of the intelligent terminal through a camera API in the Opencv library, and acquires a facial image of a current wheelchair user;
a TensorFlow frame with a multi-neural network layer is arranged in a model calling module of the face signal conversion APP, and an MTCNN model is arranged in the TensorFlow frame;
and a rotation judgment threshold is set in a control signal conversion module of the face signal conversion APP.
3. The method for controlling an intelligent wheelchair based on motion following of facial key points as claimed in claim 2, wherein the step of obtaining facial feature points through the model calling module by the facial signal conversion APP in step S2 is as follows:
s21: a model calling module of the face signal conversion APP receives the collected face image in a picture stream mode, and the face image is subjected to image preprocessing in the MTCNN model;
s22: and the face signal conversion APP obtains face image feature points from the preprocessed face image through a model calling module in the MTCNN model and follows the face image feature points.
4. The method for controlling an intelligent wheelchair based on motion following of facial key points as claimed in claim 3, wherein the facial signal conversion APP obtains facial image feature points from a preprocessed facial image through a model calling module in the MTCNN model by:
s221: the face signal conversion APP rapidly generates candidate windows from the preprocessed face images through P-Net of three cascade networks in the MTCNN model;
s222: the face signal conversion APP carries out high-precision candidate window filtering selection on the rapidly generated candidate windows through R-Net of three cascade networks in the MTCNN model;
s223: the face signal conversion APP outputs final face frames and face feature points through O-Net of three cascade networks in the MTCNN model according to the high-precision candidate windows;
the facial feature points include at least left eye, right eye, nose, left mouth corner and right mouth corner.
5. The method for controlling an intelligent wheelchair based on motion following of facial key points as claimed in claim 4, wherein the conversion of facial feature points into wheelchair control signals by the facial signal conversion APP through the control signal conversion module is specifically as follows:
the facial signal conversion APP acquires a distance D1 between a left eye and a right eye, a distance D2 between the left eye and a nose, a distance D3 between the right eye and the nose, a distance D4 from a left mouth corner to the nose and a distance D5 from the right mouth corner to the nose in the facial feature points through the control signal conversion module;
the rotation determination threshold value is set with a left rotation determination value of X1, a right rotation determination value of X2, a reverse determination value of X3, and a stop determination value of X4;
when the control signal conversion module detects that the distance D1 between the left eye and the right eye is within X1 times of the distance D2 between the left eye and the nose, the control signal conversion module outputs a wheelchair left-turning control signal;
when the control signal conversion module detects that the distance D1 between the left eye and the right eye is within X2 times of the distance D3 between the right eye and the nose, the control signal conversion module outputs a wheelchair right-turning control signal;
when the control signal conversion module detects that the larger of the distance D2 between the left eye and the nose and the distance D3 between the right eye and the nose exceeds X3 times the larger of the distance D4 from the left mouth corner to the nose and the distance D5 from the right mouth corner to the nose, the control signal conversion module outputs a wheelchair retreat control signal;
when the control signal conversion module detects that the larger of the distance D4 from the left mouth corner to the nose and the distance D5 from the right mouth corner to the nose exceeds X4 times the larger of the distance D2 between the left eye and the nose and the distance D3 between the right eye and the nose, the control signal conversion module outputs a wheelchair stop control signal.
6. The method for controlling an intelligent wheelchair based on the following of the movement of the facial key points as claimed in claim 1 or 2, wherein the wireless communication module adopts a bluetooth communication technology, and bluetooth of the intelligent terminal is connected with a signal receiving end of the wheelchair to transmit the wheelchair control signal to the signal receiving end of the wheelchair.
7. An intelligent wheelchair based on facial key point motion following comprises a wheelchair body and is characterized in that a facial key point control circuit is arranged on the wheelchair body, and the facial key point control circuit comprises a wireless communication receiving unit, a main control unit, a left wheel motor driving unit and a right wheel motor driving unit;
the signal input end of the wireless communication receiving unit serves as the signal receiving end of the wheelchair body to receive the wheelchair control signal, the output end of the wireless communication receiving unit is connected with the control signal input end of the main control unit, and the signal output end of the main control unit is respectively connected with the signal input end of the left wheel motor driving unit and the signal input end of the right wheel motor driving unit.
8. The intelligent wheelchair based on facial key point motion following as claimed in claim 7, wherein the main control unit adopts a main control chip U1 of model STM32F103VET6, the fiftieth pin, the seventy-fifth pin, the hundredth pin, the twenty-eighth pin and the eleventh pin of the main control chip U1 are interconnected and connected with a power supply VP, the fiftieth pin of the main control chip U1 is grounded through a twelfth capacitor C12, a thirteenth capacitor C13, a fourteenth capacitor C14, a fifteenth capacitor C15, a sixteenth capacitor C16 and a seventeenth capacitor C17 respectively, the sixth pin of the main control chip U1 is connected with the fiftieth pin of the main control chip U1 through a fifth resistor R5, the twenty-second pin of the main control chip U1 is grounded through a first magnetic bead FB1 and a nineteenth capacitor C19 in turn, the common terminal of the first magnetic bead FB1 and the nineteenth capacitor C19 is connected with the power supply VP, a twenty-second pin of the main control chip U1 is used as an output end of a power supply VA, a twentieth pin of the main control chip U1 is grounded, a twenty-first pin of the main control chip U1 is connected with the power supply VA through a sixth resistor R6, a twenty-first pin of the main control chip U1 is grounded through an eleventh capacitor C11, and a forty-ninth pin, a seventy-fourth pin, a ninety-ninth pin, a twenty-seventh pin, a tenth pin and a nineteenth pin of the main control chip U1 are interconnected and grounded;
the eighth pin of the main control chip U1 is grounded through an eighth capacitor C8, the ninth pin of the main control chip U1 is grounded through a ninth capacitor C9, the eighth pin of the main control chip U1 is further connected with the ninth pin of the main control chip U1 through a first crystal oscillator X1, the fifty-fifth pin of the main control chip U1 is connected through a third resistor R3 with the anode of a third diode D3, the cathode of the third diode D3 is grounded, the fifty-sixth pin of the main control chip U1 is connected through a fourth resistor R4 with the anode of a fourth diode D4, the cathode of the fourth diode D4 is grounded, the twelfth pin of the main control chip U1 is grounded through a tenth capacitor C10, the thirteenth pin of the main control chip U1 is grounded through an eighth resistor R8 and an eighteenth capacitor C18 in turn, the twelfth pin of the main control chip U1 is further connected, through a second crystal oscillator X2 and a seventh resistor R7 in sequence, with the common terminal of the eighth resistor R8 and the eighteenth capacitor C18, the fourteenth pin of the main control chip U1 is connected with the power supply VP through a tenth resistor R10, the fourteenth pin of the main control chip U1 is grounded through a twenty-second capacitor C22, and the ninety-fourth pin of the main control chip U1 is grounded through a ninth resistor R9;
the ninety-fifth pin, the ninety-sixth pin, the forty-seventh pin and the forty-eighth pin of the main control chip U1 are used as signal output ends of the left wheel motor driving unit to be connected with the left wheel motor driving unit, the fifty-first pin, the fifty-second pin, the fifty-third pin and the fifty-fourth pin of the main control chip U1 are used as signal output ends of the right wheel motor driving unit to be connected with the right wheel motor driving unit, and the sixty-eighth pin and the sixty-ninth pin of the main control chip U1 are used as input ends of wireless communication control signals to be connected with the wireless communication receiving unit.
9. The intelligent wheelchair based on facial keypoint motion following according to claim 8, characterized in that said facial keypoint control circuit further comprises an overspeed protection unit and a power supply unit, said power supply unit supplying power to said main control unit and to said overspeed protection unit;
the power supply unit adopts a low-dropout voltage regulator U2 of model AMS1117-3.3, the input end of the voltage regulator U2 is grounded through a first capacitor C1 and a second capacitor C2 respectively, the input end of the voltage regulator U2 is also connected with the source of a first MOS transistor Q1, the gate of the first MOS transistor Q1 is grounded through a second resistor R2, the drain of the first MOS transistor Q1 is connected with the power supply end of a battery through a first fuse, the drain of the first MOS transistor Q1 is grounded through a fifth capacitor C5, the drain of the first MOS transistor Q1 is connected with the cathode of a first zener diode D1, the anode of the first zener diode D1 is grounded, the drain of the first MOS transistor Q1 also serves as the output end of a power supply VB, the output end of the voltage regulator U2 is grounded through a third capacitor C3 and a fourth capacitor C4 respectively, the output end of the voltage regulator U2 is connected through a first resistor R1 with the anode of a second diode D2, the cathode of the second diode D2 is grounded, and the output end of the voltage regulator U2 serves as the output end of the power supply VP;
and the twenty-second pin and the twenty-third pin of the main control chip U1 are used as overspeed protection signal input ends to be connected with the overspeed protection unit.
CN202010407075.7A 2020-05-14 2020-05-14 Intelligent wheelchair based on face key point motion following and control method thereof Pending CN111538344A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202010407075.7A CN111538344A (en) 2020-05-14 2020-05-14 Intelligent wheelchair based on face key point motion following and control method thereof

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202010407075.7A CN111538344A (en) 2020-05-14 2020-05-14 Intelligent wheelchair based on face key point motion following and control method thereof

Publications (1)

Publication Number Publication Date
CN111538344A true CN111538344A (en) 2020-08-14

Family

ID=71975942

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202010407075.7A Pending CN111538344A (en) 2020-05-14 2020-05-14 Intelligent wheelchair based on face key point motion following and control method thereof

Country Status (1)

Country Link
CN (1) CN111538344A (en)


Patent Citations (13)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN201716580U (en) * 2010-06-02 2011-01-19 华中科技大学 Electric wheelchair control system
CN101889928B (en) * 2010-07-27 2012-04-18 北京理工大学 Head gesture recognition technology-based wheelchair control method
CN105105938A (en) * 2015-07-14 2015-12-02 南京邮电大学 Intelligent wheelchair control method and system based on face orientation identification and tracking
CN105125355A (en) * 2015-08-13 2015-12-09 重庆科技学院 Tongue control system and tongue control method for disabled-aid wheelchair
CN106023373A (en) * 2016-05-23 2016-10-12 三峡大学 Big data and human face identification based access control system for school dormitory
CN106534684A (en) * 2016-11-15 2017-03-22 极翼机器人(上海)有限公司 Terminal control system and operation method thereof
CN108272566A (en) * 2018-03-17 2018-07-13 郑州大学 A kind of intelligent wheel chair based on multi-modal man-machine interface
CN209575092U (en) * 2018-08-16 2019-11-05 广东工贸职业技术学院 Intelligent wheel chair control device, intelligent wheel chair and remote health monitoring intelligent wheelchair system
CN109344773A (en) * 2018-10-06 2019-02-15 广州智体科技有限公司 A kind of taxi driver's face identification system method and device
CN109948672A (en) * 2019-03-05 2019-06-28 张智军 A kind of wheelchair control method and system
CN110197146A (en) * 2019-05-23 2019-09-03 招商局金融科技有限公司 Facial image analysis method, electronic device and storage medium based on deep learning
CN110263691A (en) * 2019-06-12 2019-09-20 合肥中科奔巴科技有限公司 Head movement detection method based on android system
CN110399844A (en) * 2019-07-29 2019-11-01 南京图玩智能科技有限公司 It is a kind of to be identified and method for tracing and system applied to cross-platform face key point

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
Weng Lei, et al.: "Head Pose Recognition Applied to Intelligent Wheelchair Control" (应用于智能轮椅控制的头部姿态识别), Electronic Measurement Technology (《电子测量技术》) *

Similar Documents

Publication Publication Date Title
CN112660157B (en) Multifunctional remote monitoring and auxiliary driving system for barrier-free vehicle
CN106572581A (en) Highway tunnel lamp control and energy consumption monitoring system based on area controllers
CN111787218A (en) Monitoring camera based on digital retina technology
CN105590084A (en) Robot human face detection tracking emotion detection system
CN111538344A (en) Intelligent wheelchair based on face key point motion following and control method thereof
Miao et al. Vehicle control system based on dynamic traffic gesture recognition
CN202923394U (en) Intelligent navigator safe driving system based on Andriod system
CN114415695A (en) Tea garden inspection system based on vision technology and inspection robot
CN113160260A (en) Head-eye double-channel intelligent man-machine interaction system and operation method
CN108805085A (en) Intelligent sleep detection method based on eye recognition and system
CN112735083A (en) Embedded gateway for flame detection by using YOLOv5 and OpenVINO and deployment method thereof
CN210270119U (en) Car light detection device based on CAN bus control
CN116645587A (en) Image recognition model generation method, image recognition method, system and chip
CN209575092U (en) Intelligent wheel chair control device, intelligent wheel chair and remote health monitoring intelligent wheelchair system
CN112672468A (en) Wisdom street lamp system based on thing networking
Sahu Automatic camera based eye controlled wheelchair system using raspberry pi
CN112014763A (en) Vehicle lamp detection device based on CAN bus control and detection method thereof
CN106710029A (en) Intelligent automobile data recorder based on wireless communication
CN215814080U (en) Head-eye double-channel intelligent man-machine interaction system
CN115115999A (en) Method, system and storage medium for detecting employee sleep post based on artificial intelligence
CN107432794A (en) A kind of Intelligent Recognition and the method driven safely
CN113887414A (en) Target detection method, target detection device, electronic equipment and storage medium
CN105825631A (en) Video intelligent algorithm-based fatigue detection method and system
CN207257385U (en) A kind of vehicle drivers status monitors system
Peng Design and Implementation of Intelligent Car based on Machine Vision

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
RJ01 Rejection of invention patent application after publication (application publication date: 20200814)