CN113524182A - Device and method for intelligently adjusting distance between person and screen - Google Patents

Device and method for intelligently adjusting distance between person and screen

Info

Publication number
CN113524182A
Authority
CN
China
Prior art keywords
mechanical arm
fatigue
steering engine
detection module
serial port
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN202110789453.7A
Other languages
Chinese (zh)
Other versions
CN113524182B (en)
Inventor
刘维凯
赵晏斌
张�浩
段天奇
白婷婷
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Northeast Petroleum University
Original Assignee
Northeast Petroleum University
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Northeast Petroleum University filed Critical Northeast Petroleum University
Priority to CN202110789453.7A priority Critical patent/CN113524182B/en
Publication of CN113524182A publication Critical patent/CN113524182A/en
Application granted granted Critical
Publication of CN113524182B publication Critical patent/CN113524182B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Classifications

    • B - PERFORMING OPERATIONS; TRANSPORTING
    • B25 - HAND TOOLS; PORTABLE POWER-DRIVEN TOOLS; MANIPULATORS
    • B25J - MANIPULATORS; CHAMBERS PROVIDED WITH MANIPULATION DEVICES
    • B25J9/00 - Programme-controlled manipulators
    • B25J9/16 - Programme controls
    • B25J9/1694 - Programme controls characterised by use of sensors other than normal servo-feedback from position, speed or acceleration sensors, perception control, multi-sensor controlled systems, sensor fusion
    • B25J9/1697 - Vision controlled systems
    • B - PERFORMING OPERATIONS; TRANSPORTING
    • B25 - HAND TOOLS; PORTABLE POWER-DRIVEN TOOLS; MANIPULATORS
    • B25J - MANIPULATORS; CHAMBERS PROVIDED WITH MANIPULATION DEVICES
    • B25J13/00 - Controls for manipulators
    • B25J13/08 - Controls for manipulators by means of sensing devices, e.g. viewing or touching devices
    • B25J13/087 - Controls for manipulators by means of sensing devices, e.g. viewing or touching devices for sensing other physical parameters, e.g. electrical or chemical properties
    • B - PERFORMING OPERATIONS; TRANSPORTING
    • B25 - HAND TOOLS; PORTABLE POWER-DRIVEN TOOLS; MANIPULATORS
    • B25J - MANIPULATORS; CHAMBERS PROVIDED WITH MANIPULATION DEVICES
    • B25J17/00 - Joints
    • B25J17/02 - Wrist joints
    • B25J17/0258 - Two-dimensional joints
    • B - PERFORMING OPERATIONS; TRANSPORTING
    • B25 - HAND TOOLS; PORTABLE POWER-DRIVEN TOOLS; MANIPULATORS
    • B25J - MANIPULATORS; CHAMBERS PROVIDED WITH MANIPULATION DEVICES
    • B25J18/00 - Arms
    • B25J18/02 - Arms extensible
    • B25J18/025 - Arms extensible telescopic
    • B - PERFORMING OPERATIONS; TRANSPORTING
    • B25 - HAND TOOLS; PORTABLE POWER-DRIVEN TOOLS; MANIPULATORS
    • B25J - MANIPULATORS; CHAMBERS PROVIDED WITH MANIPULATION DEVICES
    • B25J19/00 - Accessories fitted to manipulators, e.g. for monitoring, for viewing; Safety devices combined with or specially adapted for use in connection with manipulators
    • B25J19/02 - Sensing devices
    • B25J19/021 - Optical sensing devices
    • B25J19/023 - Optical sensing devices including video camera means
    • G - PHYSICS
    • G09 - EDUCATION; CRYPTOGRAPHY; DISPLAY; ADVERTISING; SEALS
    • G09F - DISPLAYING; ADVERTISING; SIGNS; LABELS OR NAME-PLATES; SEALS
    • G09F9/00 - Indicating arrangements for variable information in which the information is built-up on a support by selection or combination of individual elements
    • G09F9/30 - Indicating arrangements for variable information in which the information is built-up on a support by selection or combination of individual elements in which the desired character or characters are formed by combining individual elements

Landscapes

  • Engineering & Computer Science (AREA)
  • Robotics (AREA)
  • Mechanical Engineering (AREA)
  • Human Computer Interaction (AREA)
  • Multimedia (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Measurement Of The Respiration, Hearing Ability, Form, And Blood Characteristics Of Living Organisms (AREA)
  • User Interface Of Digital Computer (AREA)

Abstract

The invention discloses a device and a method for intelligently adjusting the distance between a person and a screen. The device comprises a display screen together with a camera sensor, a telescopic mechanical arm, an image recognition module, a fatigue detection module, a signal processing module and a driving motor controller, all interconnected. The telescopic mechanical arm is connected with the camera sensor, and the driving motor controller controls the telescopic movement of the telescopic mechanical arm. The image recognition module recognizes whether a person is present in front of the screen and transmits the result to the driving motor controller. The fatigue detection module identifies human-body features and behavior patterns and judges whether the user is in a fatigued state. The driving motor controller receives the signals returned by the image recognition module and the fatigue detection module and controls the action of the telescopic mechanical arm under its built-in program.

Description

Device and method for intelligently adjusting distance between person and screen
Technical Field
The invention relates to a display screen support, in particular to a display screen support capable of intelligently adjusting the distance between a person and a screen.
Background
With continuing advances in science and technology, electronic equipment such as computers and televisions is ubiquitous in daily life and has become a main medium through which people learn about and access outside information, so mounting such equipment and moving it telescopically has become a common need. At present, screens are mounted on fixed frames whose position can be adjusted only by manual operation; no mechanical device automatically and telescopically adjusts the forward-and-backward displacement of the screen. There is therefore an urgent need for an apparatus and method that intelligently adjusts the distance between a person and a screen, conveniently and automatically adjusts the horizontal forward-and-backward displacement of the screen, and detects the presence and posture characteristics of the person in real time to realize intelligent movement.
Disclosure of Invention
In order to solve the technical problems mentioned in the background art, the invention provides a device and a method for intelligently adjusting the distance between a person and a screen. They address the fact that the supports currently on the market are all adjusted manually, free the user's hands and thereby improve working efficiency.
The technical solution of the invention is as follows. The device for intelligently adjusting the distance between a person and a screen comprises a display screen together with a camera sensor, a telescopic mechanical arm, an image recognition module, a fatigue detection module, a signal processing module and a driving motor controller, all interconnected. The telescopic mechanical arm is connected with the camera sensor, and the driving motor controller controls the telescopic movement of the telescopic mechanical arm. The image recognition module recognizes whether a person is present in front of the screen and transmits the result to the driving motor controller. The fatigue detection module identifies human-body features and behavior patterns and judges whether the user is in a fatigued state. The driving motor controller receives the signals returned by the image recognition module and the fatigue detection module and controls the action of the telescopic mechanical arm under its built-in program and the signal processing module.
The driving motor controller comprises a digital steering engine 11, a first serial port steering engine 12 and a second serial port steering engine 13. The digital steering engine 11 is an MG995 digital steering engine, and the first serial port steering engine 12 and the second serial port steering engine 13 are both MG996R serial port motors. The central processing unit of the driving motor controller controls the movement of the steering engines according to the length of the clock pulse signal. The digital steering engine 11 controls the first serial port steering engine 12 and the second serial port steering engine 13 simultaneously; the two serial port steering engines control the action of the telescopic mechanical arm, and the digital steering engine 11 adjusts the direction of the telescopic mechanical arm.
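As an illustration of pulse-width control of this kind of steering engine, the minimal Python sketch below maps a target angle to a pulse width. The patent only states that the central processing unit controls the steering engines by the length of the clock pulse signal; the 50 Hz frame, the roughly 0.5-2.5 ms pulse range and the 180-degree travel are assumptions based on common hobby-servo conventions, not figures taken from the patent.

```python
# Minimal sketch: map a steering-engine target angle to a PWM pulse width.
# Assumptions (not from the patent): 50 Hz frame (20 ms period), 0.5-2.5 ms
# pulse range covering 0-180 degrees of travel.

PERIOD_MS = 20.0        # assumed PWM frame length
MIN_PULSE_MS = 0.5      # assumed pulse width at 0 degrees
MAX_PULSE_MS = 2.5      # assumed pulse width at 180 degrees
TRAVEL_DEG = 180.0      # assumed mechanical travel

def angle_to_pulse_ms(angle_deg: float) -> float:
    """Clamp the requested angle and convert it to a pulse width in ms."""
    angle = max(0.0, min(TRAVEL_DEG, angle_deg))
    span = MAX_PULSE_MS - MIN_PULSE_MS
    return MIN_PULSE_MS + span * (angle / TRAVEL_DEG)

def pulse_to_duty_percent(pulse_ms: float) -> float:
    """Duty cycle (%) a PWM peripheral would need for this pulse width."""
    return 100.0 * pulse_ms / PERIOD_MS

if __name__ == "__main__":
    for angle in (0, 45, 90, 180):
        pulse = angle_to_pulse_ms(angle)
        print(f"{angle:>3} deg -> {pulse:.2f} ms pulse "
              f"({pulse_to_duty_percent(pulse):.1f}% duty)")
```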
The telescopic mechanical arm comprises a base 1, a positioning seat 2, a rotating platform 3, a first mechanical arm 4 of a main mechanical arm, a second mechanical arm 5 of the main mechanical arm, a triangular rotating sheet 6, an auxiliary mechanical arm 7, a first connecting rod 8, a crank 9, a rotating gear 14, a second connecting rod 15 and a third connecting rod 16.
The base 1 is embedded with the positioning seat 2, and the digital steering engine 11 is located in the base 1; it contains a programmable logic chip and sends PWM control signals to the first serial port steering engine 12 and the second serial port steering engine 13 according to a set program.
The rotary table 3 is buckled with a first mechanical arm 4 of a main mechanical arm, a first serial port steering engine 12 and a second serial port steering engine 13 are respectively arranged on the left side and the right side of the rotary table 3, and the two serial port steering engines are connected with a digital steering engine 11 through wires; the first serial port steering engine 12 is used for controlling the motion and the angle of the first mechanical arm 4 of the main mechanical arm, and simultaneously provides a feedback signal for the digital steering engine through current and voltage signals.
The first mechanical arm 4 of the main mechanical arm is connected with a triangular rotating sheet 6 at one end far away from the rotating table; the second serial port steering engine 13 is used for controlling the motion and the angle of the second mechanical arm 5 of the main mechanical arm; the second mechanical arm 5 of the main mechanical arm is connected with a crank 9; the upper end of the crank 9 is riveted with the first connecting rod 8 through a bearing, and the crank 9 drives the first connecting rod 8 to move.
The two ends of the triangular rotating piece 6 are hinged with the auxiliary mechanical arm 7 and the third connecting rod 16 respectively, and the triangular rotating piece 6 is used for adjusting the angle to ensure that the auxiliary mechanical arm 7 and the third connecting rod 16 stably extend and retract the display 10 in the movement process; one end of the auxiliary mechanical arm 7 and one end of the second connecting rod 15, which are far away from the triangular rotating piece 6, are buckled with the display 10 so as to adjust the direction of the display.
The image recognition module operates as follows under its built-in program:
a pre-trained VGG16 convolutional neural network is used to judge whether a person or another object appears in front of the camera. Once a person is determined to be in front of the camera, the image captured by the camera in real time is scaled in the background to a 128 × 128 pixel image (16,384 pixels). If the identified user's face occupies more than 30 × 30 pixels (900 pixels), the face of the user in front of the camera is monitored and the facial fatigue detection part of the fatigue detection module is activated; if the face occupies fewer than 30 × 30 pixels (900 pixels), whole-body/half-body monitoring is performed on the user in front of the camera and the whole-body/half-body fatigue detection part of the fatigue detection module is activated. After the corresponding fatigue detection part to activate has been determined, the brightness of the environment is judged using Laplacian edge detection: the 128 × 128 pixel image is converted in the background to a grayscale image, the grayscale image is convolved with a 3 × 3 Laplacian operator to obtain an edge-detection map, and the variance of the edge-detection map is calculated. When the variance is greater than 600 the environment is defined as bright, and when it is less than 600 it is defined as dark. The activation-module information and the ambient brightness information are assembled into a matrix and transmitted to the fatigue detection module.
The fatigue detection module consists of an LSTM long short-term memory neural network combined with a GAN adversarial neural network, and of OpenCV + OpenPose; the LSTM + GAN combination is the facial fatigue detection part of the fatigue detection module, and OpenCV + OpenPose is the whole-body/half-body fatigue detection part. The fatigue detection module operates as follows under its built-in program.
When the fatigue detection module receives the face-detection activation information and the bright-environment information matrix returned by the image recognition module, only the LSTM long short-term memory neural network is activated to observe 68 key points of the user's face. The eye key points are blink-counted, and a fatigue discrimination formula built from the relationship between blink frequency and image frame count serves as the first fatigue judgment; the mouth key points are then located, whether the user is yawning is judged from the degree of mouth opening, and the yawn duration is recorded as the second fatigue judgment; finally, the head key points are located, a perpendicular bisector is constructed using the chin key point as the normal point, and the angle between the face and this perpendicular bisector is calculated as the third fatigue judgment. When the fatigue detection module receives the face-detection activation information and the dark-environment information matrix returned by the image recognition module, the GAN adversarial neural network is activated: a pre-trained generative model recovers the facial details, restoring the low-quality face data captured by the camera into a high-quality image, and the LSTM network then performs the fatigue judgments on the 68 facial key points. When the fatigue detection module receives the whole-body/half-body detection activation information returned by the image recognition module, the OpenCV + OpenPose part is activated: a convolutional neural network extracts from the input real-time image 1 head key point S1, 2 shoulder key points S2, S3, 2 ankle key points S4, S5, 2 crotch key points S6, S7, 2 knee joint key points S8, S9 and 2 ankle key points S10, S11; the Euclidean distances between the head key point S1 and the shoulder key points S2, S3 serve as the first fatigue judgment, the Euclidean distances between the head key point S1 and the ankle key points S4, S5 serve as the second fatigue judgment, and the Manhattan distance between the 2 knee joint key points S8, S9 serves as the third fatigue judgment. Finally, the fatigue state is divided into two classes, with the fatigued state assigned the value 1 and the non-fatigued state assigned 0, and the fatigue information is transmitted to the signal processing module as a 1 or 0 signal.
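The fatigue criteria above can be illustrated with simple landmark geometry. The sketch below computes a blink indicator from an eye aspect ratio, a yawn indicator from mouth opening, and head/shoulder and knee distances for the pose branch. The landmark indexing (a common 68-point face layout, S1-S11 joints), all thresholds and the comparison directions are illustrative assumptions; the patent's own fatigue discrimination formula is not spelled out in the description.

```python
# Illustrative fatigue metrics over detected key points (not the patent's formula).
# Assumptions: face_pts is a (68, 2) array in the common 68-point layout
# (eyes at 36-47, inner mouth at 60-67); thresholds are placeholders.
import numpy as np

def eye_aspect_ratio(eye_pts):
    """eye_pts: (6, 2) array of one eye's landmarks; a small EAR ~ closed eye."""
    v1 = np.linalg.norm(eye_pts[1] - eye_pts[5])
    v2 = np.linalg.norm(eye_pts[2] - eye_pts[4])
    h = np.linalg.norm(eye_pts[0] - eye_pts[3])
    return (v1 + v2) / (2.0 * h)

def mouth_opening(face_pts):
    """Ratio of inner-mouth height to mouth width as a yawn indicator."""
    height = np.linalg.norm(face_pts[62] - face_pts[66])
    width = np.linalg.norm(face_pts[48] - face_pts[54])
    return height / width

def face_fatigue(face_pts, ear_thresh=0.2, yawn_thresh=0.6):
    """Facial cues: prolonged eye closure or a wide-open mouth (yawn)."""
    ear = (eye_aspect_ratio(face_pts[36:42]) +
           eye_aspect_ratio(face_pts[42:48])) / 2.0
    return ear < ear_thresh or mouth_opening(face_pts) > yawn_thresh

def body_fatigue(joints, head_shoulder_thresh=40.0, knee_thresh=25.0):
    """joints: dict name -> (x, y). Assumption: distances shrink when slumping."""
    head = np.asarray(joints["S1"])
    d_shoulder = min(np.linalg.norm(head - np.asarray(joints[k]))
                     for k in ("S2", "S3"))              # Euclidean, 1st judgment
    knee_gap = np.sum(np.abs(np.asarray(joints["S8"]) -
                             np.asarray(joints["S9"])))  # Manhattan, 3rd judgment
    return d_shoulder < head_shoulder_thresh or knee_gap < knee_thresh

def fatigue_signal(is_fatigued: bool) -> int:
    """Encode the result as 1 (fatigued) / 0 (not fatigued), as in the patent."""
    return 1 if is_fatigued else 0
```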
The signal processing module receives the 1 or 0 signals transmitted back by the image recognition module and the fatigue detection module, converts them into drive electrical signals readable by the driving motor controller, and commands the driving motor controller to enter the working state.
The invention has the following beneficial effects. When no user is working or viewing in front of the screen, the mechanism that intelligently adjusts the screen is not in the working state and the telescopic mechanical arm remains dormant. When the image recognition module and the fatigue detection module detect that a user is working or viewing in front of the screen, the mechanism enters the working state: the image data are immediately passed to the signal processing module, which extracts and converts the image data stream into an effective readable format and passes it to the driving motor controller; the driving motor controller then sends electrical-signal commands to the telescopic mechanical arm so that it adjusts the screen back and forth in the horizontal direction. In the working state, once a user has been observed continuously in front of the display screen for 30 minutes, the driving motor controller and the telescopic mechanical arm begin to automatically push and pull the screen back and forth horizontally, and thereafter perform one horizontal back-and-forth movement at an interval of 5-15 minutes depending on the fatigue state of the user.
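The timing behaviour described above (30 minutes of continuous presence before the first movement, then one horizontal push-pull every 5-15 minutes depending on fatigue) can be sketched as a small state machine. The class and method names are illustrative, and the mapping from the fatigue flag to a concrete interval inside the 5-15 minute window is an assumption, since the patent does not specify how the interval is chosen.

```python
# Minimal sketch of the screen-movement scheduler described above.
# Assumptions: the presence and fatigue flags come from the recognition
# modules, and `now` is a monotonic timestamp in seconds (e.g. time.monotonic());
# fatigue simply selects the short end of the 5-15 minute window here.

PRESENCE_BEFORE_START_S = 30 * 60   # 30 minutes of continuous presence
FATIGUED_INTERVAL_S = 5 * 60        # assumed: move more often when fatigued
RESTED_INTERVAL_S = 15 * 60         # assumed: move less often otherwise

class ScreenScheduler:
    def __init__(self):
        self.present_since = None
        self.last_move = None

    def update(self, user_present: bool, fatigued: bool, now: float) -> bool:
        """Return True when the arm should perform one horizontal push-pull."""
        if not user_present:
            self.present_since = None   # arm goes back to the dormant state
            self.last_move = None
            return False
        if self.present_since is None:
            self.present_since = now
        if now - self.present_since < PRESENCE_BEFORE_START_S:
            return False                # fewer than 30 minutes of presence
        interval = FATIGUED_INTERVAL_S if fatigued else RESTED_INTERVAL_S
        if self.last_move is None or now - self.last_move >= interval:
            self.last_move = now
            return True
        return False
```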
Description of the drawings:
FIG. 1 is a flow diagram of an image recognition module according to an embodiment of the present invention;
FIG. 2 is a flow diagram of a fatigue detection module according to an embodiment of the invention;
FIG. 3 is a schematic structural diagram of an external appearance according to an embodiment of the present invention;
FIG. 4 is an exploded view of an external structure according to an embodiment of the present invention;
FIG. 5 is a diagram of 68 fatigue detection keypoint locations according to an embodiment of the present invention;
FIG. 6 is a diagram of 11 joint positions identified according to an embodiment of the present invention.
Detailed description of the embodiments:
The invention will be further described below with reference to the accompanying drawings.
FIG. 1 is a flow chart of the image recognition module of the device for intelligently adjusting the distance between a person and a screen, and the device is implemented according to this flow. After the camera sensor receives an image, a pre-trained VGG16 convolutional neural network judges whether a person or another object appears in front of the camera. Once a person is determined to be in front of the camera, the image recognition module is started: after program initialization it receives the image captured by the camera in real time and scales it in the background to a 128 × 128 pixel image (16,384 pixels). If the identified user's face occupies more than 30 × 30 pixels (900 pixels), the facial fatigue detection part of the fatigue detection module is activated; if the face occupies fewer than 30 × 30 pixels (900 pixels), the whole-body/half-body fatigue detection part of the fatigue detection module is activated. Activation information is assigned the value 1 and non-activation information the value 0. While the face pixels are being counted, Laplacian edge detection is performed to determine how dark the environment is: the 128 × 128 pixel image is converted to a grayscale image, the grayscale image is convolved with a 3 × 3 Laplacian operator to obtain an edge-detection map, and the variance of the edge-detection map is calculated. When the variance is greater than 600 the environment is defined as bright, and when it is less than 600 it is defined as dark; bright-environment information is assigned the value 1 and dark-environment information the value 0. The information is assembled into a matrix and transmitted to the fatigue detection module. For example, the face-detection matrix in a bright environment is [0,1,1,0], and the whole-body/half-body detection matrix in a dark environment is [1,0,0,1].
FIG. 2 is a flow chart of the fatigue detection module, and the device is implemented according to this flow. When the information matrix returned by the image recognition module is [0,1,1,0], only the LSTM long short-term memory neural network is activated to observe the 68 key points of the user's face shown in FIG. 5: the eye key points are first blink-counted, and a fatigue discrimination formula built from the relationship between blink frequency and image frame count serves as the first fatigue judgment; the mouth key points are then located, whether the user is yawning is judged from the degree of mouth opening, and the yawn duration is recorded as the second fatigue judgment; finally, the head key points are located, a perpendicular bisector is constructed using the chin key point as the normal point, and the angle between the face and this perpendicular bisector is calculated as the third fatigue judgment. When the information matrix returned by the image recognition module is [1,0,1,0], the GAN adversarial neural network is activated: a pre-trained generative model recovers the facial details, restoring the low-quality face data captured by the camera into a high-quality image, and the LSTM long short-term memory neural network then performs the fatigue judgments on the 68 facial key points.
When the information matrix returned by the image recognition module is [0,1,0,1] or [1,0,0,1], the OpenCV + OpenPose part is activated and a human skeleton frame is extracted from the input real-time image by a convolutional neural network, as shown in FIG. 6: 1 head key point S1, 2 shoulder key points S2, S3, 2 ankle key points S4, S5, 2 crotch key points S6, S7, 2 knee joint key points S8, S9 and 2 ankle key points S10, S11. The Euclidean distances between the head key point S1 and the shoulder key points S2, S3 serve as the first fatigue judgment, the Euclidean distances between the head key point S1 and the ankle key points S4, S5 serve as the second fatigue judgment, and the Manhattan distance between the 2 knee joint key points S8, S9 serves as the third fatigue judgment. Whether to activate the telescopic mechanical arm is finally decided according to whether these fatigue judgment conditions are met.
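A small dispatch sketch of how the matrices quoted above could be routed to the three detector branches is given below. Only the example matrices actually listed in the description are mapped, since the general layout of the four-element matrix is not spelled out in the patent, and the branch functions are placeholders rather than the patent's implementations.

```python
# Minimal sketch: route the image recognition module's matrix to a detector
# branch, using only the example matrices given in the description.
# The branch functions are placeholders, not the patent's implementations.

def lstm_face_branch(frame):
    # Placeholder for the bright-environment facial branch (LSTM over 68 points).
    return False

def gan_then_lstm_face_branch(frame):
    # Placeholder for the dark-environment facial branch (GAN restore, then LSTM).
    return False

def openpose_body_branch(frame):
    # Placeholder for the whole-/half-body branch (OpenCV + OpenPose distances).
    return False

DISPATCH = {
    (0, 1, 1, 0): lstm_face_branch,          # face detection, bright environment
    (1, 0, 1, 0): gan_then_lstm_face_branch, # face detection, dark environment
    (0, 1, 0, 1): openpose_body_branch,      # whole/half body detection
    (1, 0, 0, 1): openpose_body_branch,      # whole/half body, dark environment
}

def run_fatigue_detection(matrix, frame) -> int:
    """Return the 1/0 fatigue signal produced by the selected branch."""
    branch = DISPATCH.get(tuple(matrix))
    if branch is None:
        return 0        # unknown code: treat as not fatigued (assumption)
    return 1 if branch(frame) else 0
```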
FIG. 3 and FIG. 4 are an appearance view and an exploded appearance view of the device for intelligently adjusting the distance between a person and a screen, which comprises a camera sensor, a telescopic mechanical arm, an image recognition module, a fatigue detection module, a signal processing module and a driving motor controller, all connected; the driving motor controller is associated with the camera sensor, the telescopic mechanical arm, the image recognition module, the fatigue detection module and the signal processing module respectively. When a user appears in front of the camera sensor 17 and is detected by the image recognition module and the fatigue detection module, the mechanism that intelligently adjusts the screen enters the working state: the image data are immediately passed to the signal processing module, which extracts and converts the image data stream into an effective readable format and passes it to the driving motor controller; the driving motor controller then sends electrical-signal commands to the telescopic mechanical arm, and the horizontal distance and the angle of the screen are adjusted. For horizontal-distance adjustment, when the support extends forward the serial port steering engine 12 drives, through the rotating gear, the first mechanical arm 4 and the second mechanical arm 5 of the main mechanical arm to swing downward, and the crank 9 drives the first connecting rod 8 to press the second connecting rod 15 to swing upward and push the screen forward. The triangular rotating piece connected to the second connecting rod rotates further toward the front of the screen, driving the triangular rotating piece 6 and the third connecting rod 16 connected at its other end to swing downward so that the mechanical arm reaches the extended state; the connecting rod 16 coordinates the motion of the two. Further, when the main mechanical arm swings downward, the triangular rotating piece connected to it rotates clockwise and, through the bearing, supports the auxiliary mechanical arm in an upward swing; the second connecting rod 15 moves in the same way, so that the main mechanical arm and the auxiliary mechanical arm gradually approach an in-line state, reducing the distance between the screen and the person and thus achieving distance adjustment. The triangular rotating piece also has a certain regulating effect on the motion of the main and auxiliary mechanical arms: in the initial state a certain angle is set between the triangular rotating piece 6 and the third connecting rod 16 of the mechanical arm, which ensures that the height of the screen does not change during distance adjustment; when the support is compressed backward, the motion states are reversed. For angle adjustment, the rotating table 3 at the base of the mechanical arm allows the arm to rotate freely and adjust its angle.
The digital steering engine 11 connected to the bottom of the rotating base and the worm inside the rotating base serve as the two main driving forces. When a signal instruction is received, the digital steering engine 11 drives the rotating shaft through worm transmission to control the rotation of the mechanical arm, thereby adjusting the angle between the screen and the person.
The drawings described above are for illustration purposes only and are not intended to limit the scope of the present disclosure in any way. In addition, the shapes, the proportional sizes, and the like of the respective members in the drawings are merely schematic for assisting the understanding of the present application, and are not particularly limited to the shapes, the proportional sizes, and the like of the respective members in the present application. Those skilled in the art, having the benefit of the teachings of this application, may select various possible shapes and proportional sizes to implement the present application, depending on the particular situation.
The device and the method for intelligently adjusting the distance between a person and a screen generally comprise a camera sensor, a telescopic mechanical arm, an image recognition module, a fatigue detection module, a signal processing module and a driving motor controller which are connected; the driving motor controller is respectively associated with the camera sensor, the telescopic mechanical arm, the image recognition module, the fatigue detection module and the signal processing module.
The camera sensor is mainly the display screen's own camera or an independent camera. The telescopic mechanical arm is connected with the camera sensor, and its telescopic movement is controlled by the driving motor controller.
Structurally, the telescopic mechanical arm comprises a base 1, a positioning seat 2, a rotating table 3, a first mechanical arm 4 of the main mechanical arm, a second mechanical arm 5 of the main mechanical arm, a triangular rotating piece 6, an auxiliary mechanical arm 7, a first connecting rod 8, a crank 9, a display screen 10, a digital steering engine 11, two serial port steering engines 12 and 13, a rotating gear 14, a second connecting rod 15 and a third connecting rod 16. The base 1 is embedded with the positioning seat 2; the digital steering engine 11 is arranged in the base 1, contains a programmable logic chip and sends PWM control signals to the serial port steering engines according to a set program. The positioning seat 2 connects the base 1 and the rotating table 3; the rotating table 3 is buckled with the first mechanical arm 4 of the main mechanical arm; the serial port steering engines 12 and 13 are arranged on the left and right sides of the rotating table 3 and are connected to the digital steering engine 11 in the base by electric wires; the serial port steering engine 12 controls the motion and angle of the first mechanical arm 4 of the main mechanical arm and at the same time provides feedback to the digital steering engine in the form of current and voltage signals. The first mechanical arm 4 of the main mechanical arm is connected with the triangular rotating piece 6 at the end away from the rotating table; the serial port steering engine 13 controls the motion and angle of the second mechanical arm 5 of the main mechanical arm; the second mechanical arm 5 of the main mechanical arm is connected with the crank 9 at the end near the serial port steering engine; the upper end of the crank 9 is riveted to the first connecting rod 8 through a bearing, and the crank 9 drives the first connecting rod 8 to move. The other two ends of the triangular rotating piece 6 are hinged to the auxiliary mechanical arm 7 and the third connecting rod 16 respectively, and the angle of the triangular rotating piece 6 is adjusted to ensure that the auxiliary mechanical arm 7 and the third connecting rod 16 extend and retract the display stably during the movement. The ends of the auxiliary mechanical arm 7 and the second connecting rod 15 away from the triangular rotating piece 6 are connected with the movable device; the movable device is buckled with the display 10, and the user can move it as required to change the direction of the display.
The image recognition module uses a pre-trained VGG16 convolutional neural network to judge whether a person or another object appears in front of the camera. Once a person is determined to be in front of the camera, the image captured by the camera in real time is scaled in the background to a 128 × 128 pixel image (16,384 pixels). If the identified user's face occupies more than 30 × 30 pixels (900 pixels), the face of the user in front of the camera is monitored and the facial fatigue detection part of the fatigue detection module is activated; if the face occupies fewer than 30 × 30 pixels (900 pixels), whole-body/half-body monitoring is performed on the user in front of the camera and the whole-body/half-body fatigue detection part of the fatigue detection module is activated. After the corresponding fatigue detection part to activate has been determined, the darkness of the environment is judged, mainly on the basis of Laplacian edge detection: the 128 × 128 pixel image is converted to a grayscale image, the grayscale image is convolved with a 3 × 3 Laplacian operator to obtain an edge-detection map, and the variance of the edge-detection map is calculated. When the variance is greater than 600 the environment is defined as bright, and when it is less than 600 it is defined as dark. The activation-module information and the ambient brightness information are assembled into a matrix and transmitted to the fatigue detection module.
The fatigue detection module consists of an LSTM long short-term memory neural network combined with a GAN adversarial neural network, and of OpenCV + OpenPose; the LSTM + GAN combination is the facial fatigue detection part of the fatigue detection module, and OpenCV + OpenPose is the whole-body/half-body fatigue detection part. When the fatigue detection module receives the face-detection activation information and the bright-environment information matrix returned by the image recognition module, only the LSTM long short-term memory neural network is activated to observe 68 key points of the user's face: the eye key points are blink-counted and a fatigue discrimination formula built from the relationship between blink frequency and image frame count serves as the first fatigue judgment; the mouth key points are then located, whether the user is yawning is judged from the degree of mouth opening, and the yawn duration is recorded as the second fatigue judgment; finally, the head key points are located, a perpendicular bisector is constructed using the chin key point as the normal point, and the angle between the face and this perpendicular bisector is calculated as the third fatigue judgment. When the fatigue detection module receives the face-detection activation information and the dark-environment information matrix returned by the image recognition module, the GAN adversarial neural network is activated: a pre-trained generative model recovers the facial details, restoring the low-quality face data captured by the camera into a high-quality image, and the LSTM network then performs the fatigue judgments on the 68 facial key points. When the fatigue detection module receives the whole-body/half-body detection activation information returned by the image recognition module, the OpenCV + OpenPose part is activated: a convolutional neural network extracts from the input real-time image 1 head key point S1, 2 shoulder key points S2, S3, 2 ankle key points S4, S5, 2 crotch key points S6, S7, 2 knee joint key points S8, S9 and 2 ankle key points S10, S11; the Euclidean distances between the head key point S1 and the shoulder key points S2, S3 serve as the first fatigue judgment, the Euclidean distances between the head key point S1 and the ankle key points S4, S5 serve as the second fatigue judgment, and the Manhattan distance between the 2 knee joint key points S8, S9 serves as the third fatigue judgment. Finally, the fatigue state is divided into two classes, with the fatigued state assigned the value 1 and the non-fatigued state assigned 0, and the fatigue information is transmitted to the signal processing module as a 1 or 0 signal.
The signal processing module and the driving motor controller receive the 1 or 0 signals transmitted by the image recognition module and the fatigue detection module, convert them into drive electrical signals readable by the driving motor controller, and command the driving motor controller to enter the working state. The driving motor controller consists of one MG995 digital steering engine 11 and two MG996R serial port motors 12 and 13; the central processing unit of the driving motor controller is located in the digital steering engine in the base 1 and controls the movement of the steering engines according to the length of the clock pulse signal. Further, the central processing unit can send a signal to a steering engine according to the judgment of the fatigue detection module; after receiving the signal, the drive circuit in the steering engine drives the motor, which in turn drives the mechanical arm to move. The digital steering engine 11 controls the two serial port steering engines 12 and 13 simultaneously; the extension and retraction of the mechanical arm are mainly performed by the two serial port steering engines, and the direction adjustment of the mechanical arm is performed by the digital steering engine 11.
As a further improvement of the invention, when the device for intelligently adjusting the distance between the person and the screen is in the working state and observes that the user has been continuously present in front of the display screen for 30 minutes, the driving motor controller and the telescopic mechanical arm begin to automatically push and pull the screen back and forth in the horizontal direction, and after starting work they perform one horizontal back-and-forth movement at an interval of 5-15 minutes depending on the fatigue state of the user.
As a further improvement of the invention, the device for intelligently adjusting the distance between the person and the screen enters the working state as soon as it has detected a person in front of the display screen continuously for 30 minutes, and the mechanism stops working as soon as it detects that the person has been away from the front of the display screen for 5 minutes, which ensures the working efficiency of the mechanism that intelligently adjusts the screen.
As a further improvement of the invention, the device for intelligently adjusting the distance between the person and the screen realizes the switching between the intelligent adjustment of the telescopic mechanical arm and the manual adjustment of the telescopic mechanical arm on a software level.
As a further improvement of the invention, the device for intelligently adjusting the distance between the person and the screen is connected via Bluetooth/WIFI, and a mobile phone App switches the telescopic mechanical arm between its automatic and manual gears.
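As a sketch of this automatic/manual switching, the snippet below parses simple text commands such as might arrive from the phone App over a Bluetooth serial or Wi-Fi socket link. The command strings and the transport are assumptions; the patent does not define the App protocol.

```python
# Minimal sketch of the auto/manual gear switch controlled from a phone App.
# Assumptions: commands arrive as plain text ("AUTO", "MANUAL", "EXTEND",
# "RETRACT") over some already-opened Bluetooth/Wi-Fi stream; the patent does
# not specify the actual protocol, so this is purely illustrative.

class ArmModeController:
    def __init__(self):
        self.automatic = True   # default: intelligent adjustment enabled

    def handle_command(self, line: str) -> str:
        cmd = line.strip().upper()
        if cmd == "AUTO":
            self.automatic = True
            return "mode=auto"
        if cmd == "MANUAL":
            self.automatic = False
            return "mode=manual"
        if cmd in ("EXTEND", "RETRACT") and not self.automatic:
            # A real device would forward this jog request to the driving
            # motor controller here.
            return f"manual-move:{cmd.lower()}"
        return "ignored"

if __name__ == "__main__":
    ctl = ArmModeController()
    for line in ("MANUAL", "EXTEND", "AUTO", "EXTEND"):
        print(line, "->", ctl.handle_command(line))
```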

Claims (2)

1. A device for intelligently adjusting the distance between a person and a screen, comprising a display screen and a camera sensor connected thereto, characterized in that:
the device also comprises a telescopic mechanical arm, an image recognition module, a fatigue detection module, a signal processing module and a driving motor controller; the telescopic mechanical arm is connected with the camera sensor, and the driving motor controller is used for controlling the telescopic movement of the telescopic mechanical arm; the image recognition module is used for recognizing the existence state of a person in front of a screen and transmitting the recognized result into the drive motor controller; the fatigue detection module is used for identifying human body characteristics and behavior modes and judging whether a user is in a fatigue state; the driving motor controller receives signals transmitted back by the image recognition module and the fatigue detection module and controls the action of the telescopic mechanical arm under the action of the built-in program and the signal processing module;
the driving motor controller comprises a digital steering engine (11), a first serial port steering engine (12) and a second serial port steering engine (13); the digital steering engine (11) adopts an MG995 type digital steering engine, and the first serial port steering engine (12) and the second serial port steering engine (13) both adopt MG996R type serial port motors; a central processing unit of the driving motor controller controls the movement of the steering engine according to the length of a clock pulse signal; the digital steering engine (11) simultaneously controls the first serial port steering engine (12) and the second serial port steering engine (13); the two serial port steering engines are used for controlling the action of the telescopic mechanical arm, and the digital steering engine (11) is used for realizing the direction adjustment of the telescopic mechanical arm;
the telescopic mechanical arm comprises a base (1), a positioning seat (2), a rotating table (3), a first mechanical arm (4) of a main mechanical arm, a second mechanical arm (5) of the main mechanical arm, a triangular rotating sheet (6), an auxiliary mechanical arm (7), a first connecting rod (8), a crank (9), a rotating gear (14), a second connecting rod (15) and a third connecting rod (16);
the base (1) is embedded with the positioning seat (2), the digital steering engine (11) is positioned in the base (1), the digital steering engine comprises a programmable logic chip, and PWM control signals are sent to the first serial port steering engine (12) and the second serial port steering engine (13) according to a set program;
the rotary table (3) is buckled with a first mechanical arm (4) of a main mechanical arm, a first serial port steering engine (12) and a second serial port steering engine (13) are respectively arranged on the left side and the right side of the rotary table (3), and the two serial port steering engines are connected with a digital steering engine (11) through electric wires; the first serial port steering engine (12) is used for controlling the motion and the angle of the first mechanical arm (4) of the main mechanical arm and providing a feedback signal for the digital steering engine through current and voltage signals;
the first mechanical arm (4) of the main mechanical arm is connected with a triangular rotating sheet (6) at one end far away from the rotating table; the second serial port steering engine (13) is used for controlling the motion and the angle of the second mechanical arm (5) of the main mechanical arm; the second mechanical arm (5) of the main mechanical arm is connected with a crank (9); the upper end of the crank (9) is riveted with the first connecting rod (8) through a bearing, and the crank (9) drives the first connecting rod (8) to move;
the two ends of the triangular rotating piece (6) are respectively hinged with the auxiliary mechanical arm (7) and the third connecting rod (16), and the triangular rotating piece (6) is used for adjusting the angle to ensure that the auxiliary mechanical arm (7) and the third connecting rod (16) stably extend and retract the display (10) in the moving process; one ends of the auxiliary mechanical arm (7) and the second connecting rod (15) far away from the triangular rotating piece (6) are buckled with the display (10) to adjust the direction of the display;
the image recognition module operates as follows under its built-in program:
a pre-trained VGG16 convolutional neural network is used to judge whether a person or another object appears in front of the camera, and once a person is determined to be in front of the camera, the image captured by the camera in real time is scaled in the background to a 128 × 128 pixel image (16,384 pixels); if the identified user's face occupies more than 30 × 30 pixels (900 pixels), the face of the user in front of the camera is monitored and the facial fatigue detection part of the fatigue detection module is activated; if the identified user's face occupies fewer than 30 × 30 pixels (900 pixels), whole-body/half-body monitoring is performed on the user in front of the camera and the whole-body/half-body fatigue detection part of the fatigue detection module is activated; after the corresponding fatigue detection part to activate has been determined, the brightness of the environment is judged on the basis of Laplacian edge detection, namely the 128 × 128 pixel image is converted in the background to a grayscale image, the grayscale image is convolved with a 3 × 3 Laplacian operator to obtain an edge-detection map and the variance of the edge-detection map is calculated, the environment being defined as bright when the variance is greater than 600 and as dark when the variance is less than 600; the activation-module information and the ambient brightness information are constructed into a matrix and transmitted to the fatigue detection module;
the fatigue detection module consists of an LSTM long short-term memory neural network combined with a GAN adversarial neural network, and of OpenCV + OpenPose, wherein the LSTM + GAN combination is the facial fatigue detection part of the fatigue detection module and OpenCV + OpenPose is the whole-body/half-body fatigue detection part; the fatigue detection module operates as follows under its built-in program:
when the fatigue detection module receives the face-detection activation information and the bright-environment information matrix returned by the image recognition module, only the LSTM long short-term memory neural network is activated to observe 68 key points of the user's face: the eye key points are blink-counted and a fatigue discrimination formula built from the relationship between blink frequency and image frame count serves as the first fatigue judgment; the mouth key points are then located, whether the user is yawning is judged from the degree of mouth opening, and the yawn duration is recorded as the second fatigue judgment; finally, the head key points are located, a perpendicular bisector is constructed using the chin key point as the normal point, and the angle between the face and this perpendicular bisector is calculated as the third fatigue judgment; when the fatigue detection module receives the face-detection activation information and the dark-environment information matrix returned by the image recognition module, the GAN adversarial neural network is activated, a pre-trained generative model recovers the facial details, restoring the low-quality face data captured by the camera into a high-quality image, and the LSTM long short-term memory neural network performs the fatigue judgments on the 68 facial key points; when the fatigue detection module receives the whole-body/half-body detection activation information returned by the image recognition module, the OpenCV + OpenPose part is activated, a convolutional neural network extracts from the input real-time image 1 head key point S1, 2 shoulder key points S2, S3, 2 ankle key points S4, S5, 2 crotch key points S6, S7, 2 knee joint key points S8, S9 and 2 ankle key points S10, S11, the Euclidean distances between the head key point S1 and the shoulder key points S2, S3 serve as the first fatigue judgment, the Euclidean distances between the head key point S1 and the ankle key points S4, S5 serve as the second fatigue judgment, and the Manhattan distance between the 2 knee joint key points S8, S9 serves as the third fatigue judgment; finally, the fatigue state is divided into two classes, the fatigued state is assigned the value 1 and the non-fatigued state the value 0, and the fatigue information is transmitted to the signal processing module as a 1 or 0 signal;
and the signal processing module receives the 1 or 0 signals transmitted back by the image recognition module and the fatigue detection module, converts them into drive electrical signals readable by the driving motor controller, and commands the driving motor controller to enter the working state.
2. A method for intelligently adjusting the distance between a person and a screen, using the device of claim 1, wherein: when the user has been continuously present in front of the display screen for 30 minutes, the driving motor controller and the telescopic mechanical arm, under the control of the built-in program, automatically push and pull the display back and forth in the horizontal direction, and after starting work they perform one horizontal back-and-forth movement at an interval of 5-15 minutes according to the fatigue state of the user; and the device stops working as soon as it detects that the person has been away from the front of the display screen for 5 minutes.
CN202110789453.7A 2021-07-13 2021-07-13 Device and method for intelligently adjusting distance between person and screen Active CN113524182B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202110789453.7A CN113524182B (en) 2021-07-13 2021-07-13 Device and method for intelligently adjusting distance between person and screen

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202110789453.7A CN113524182B (en) 2021-07-13 2021-07-13 Device and method for intelligently adjusting distance between person and screen

Publications (2)

Publication Number Publication Date
CN113524182A true CN113524182A (en) 2021-10-22
CN113524182B CN113524182B (en) 2023-05-16

Family

ID=78127667

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202110789453.7A Active CN113524182B (en) 2021-07-13 2021-07-13 Device and method for intelligently adjusting distance between person and screen

Country Status (1)

Country Link
CN (1) CN113524182B (en)

Patent Citations (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2014158055A1 (en) * 2013-03-25 2014-10-02 Rytik Andrei Petrovich Method for examining and assessing the eye fatigue of a personal computer user
KR20150139745A (en) * 2014-06-04 2015-12-14 윤재웅 Processing method for using smart phone closely to screen
CN107775658A (en) * 2016-08-24 2018-03-09 南京乐朋电子科技有限公司 Adapt to human comfort automatically adjust screen and face apart from robot
JP2018128640A (en) * 2017-02-10 2018-08-16 富士ゼロックス株式会社 Information processing apparatus, information processing system, and program
US20210369351A1 (en) * 2017-11-01 2021-12-02 Sony Corporation Surgical arm system and surgical arm control system
CN110119672A (en) * 2019-03-26 2019-08-13 湖北大学 A kind of embedded fatigue state detection system and method
CN110096957A (en) * 2019-03-27 2019-08-06 苏州清研微视电子科技有限公司 The fatigue driving monitoring method and system merged based on face recognition and Activity recognition

Also Published As

Publication number Publication date
CN113524182B (en) 2023-05-16

Similar Documents

Publication Publication Date Title
CN109571513B (en) Immersive mobile grabbing service robot system
CN104065862B (en) A kind of information processing method and electronic equipment
CN102323829A (en) Display screen visual angle regulating method and display device
CN101989126A (en) Handheld electronic device and automatic screen picture rotating method thereof
CN108989653A (en) The fatigue driving early-warning device of vehicular adaptive environment light and head pose
CN210295134U (en) Adjustable face recognition display
CN108298084A (en) A kind of armed unmanned plane of autonomous crawl object
CN108995782A (en) A kind of unmanned water rescue device of intelligence and application method based on binocular vision
CN101615033A (en) The angular adjustment apparatus of display module and method
CN106214163A (en) The artificial psychology of a kind of lower limb malformation postoperative straightening rehabilitation teaches device
CN113524182A (en) Device and method for intelligently adjusting distance between person and screen
CN112097055A (en) Display that intelligence goes up and down and rotate and control system thereof
CN103124336A (en) Television with angle adjustable built-in camera
CN211698883U (en) Gesture recognition control switch device based on FPGA
CN111336644A (en) Air conditioner adjusting system based on eyeball drive control
CN214510023U (en) Self-adaptive adjusting table and table control system
CN206460599U (en) A kind of remote control for unmanned boat
CN213420474U (en) Display that intelligence goes up and down and rotate and control system thereof
CN108594861B (en) Camera placing and moving device
CN221121663U (en) Camera device with recognition function
CN111831014A (en) Posture adjustment control method and system of display device
CN116857508A (en) Face capturing and identifying system and method
CN206948472U (en) A kind of multifunctional pick-up head
CN114244989B (en) Control method of intelligent watch with lifting rotary camera
CN208820875U (en) A kind of mobile phone camera with information collection function

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant