CN209980267U - Three-vision sign language recognition device - Google Patents

Three-vision sign language recognition device

Info

Publication number
CN209980267U
CN209980267U · CN201822059170.3U
Authority
CN
China
Prior art keywords
camera device
sign language
monocular
axis
monocular camera
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN201822059170.3U
Other languages
Chinese (zh)
Inventor
张晓利
刘欢
邹亚男
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Inner Mongolia University of Science and Technology
Original Assignee
Inner Mongolia University of Science and Technology
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Inner Mongolia University of Science and Technology filed Critical Inner Mongolia University of Science and Technology
Priority to CN201822059170.3U priority Critical patent/CN209980267U/en
Application granted granted Critical
Publication of CN209980267U publication Critical patent/CN209980267U/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Landscapes

  • User Interface Of Digital Computer (AREA)

Abstract

The utility model discloses a trinocular sign language recognition device in the technical field of intelligent sign language translation. It addresses the problem that a stand-alone gesture recognition scheme cannot accurately capture the information a deaf-mute person's sign language is meant to convey. A monocular camera device is added on the vertical plane of a binocular device to collect information in the binocular device's visual blind area, and this information complements the binocular data. A group of sign language gestures is jointly defined by the signer's facial expression, the relative position of the hand on the body during signing, and the recognized hand gesture; combining these defining elements increases the number of recognizable signs and enriches the sign language library. At the same time, the combined elements refine each sign, improving its specificity, avoiding misjudgments caused by similar gestures, and improving the accuracy of sign language recognition.

Description

Three-vision sign language recognition device
Technical Field
The utility model relates to the technical field of intelligent sign language translation, and in particular to a trinocular sign language recognition device.
Background
According to national statistics, China has about 20.57 million deaf-mute people among the five recognized categories of disability, i.e., roughly one in every hundred people, and about 800,000 of them are under seven years old. The latest data indicate that deaf-mute children account for about 0.02% of births in China each year, and, counting newborns with hearing impairment, about 1% of the population has some hearing impairment. In daily life this group faces a communication barrier with the outside world that is almost fixed from birth: the circles in which deaf-mute people can live are limited, and so are their communication and living environments. For example, when visiting a hospital unaccompanied, a deaf-mute patient faces many limitations in explaining an illness, since spoken expression is blocked and no sign language interpreter is available; at a service window such as a bank, the patient cannot communicate with the clerk. Sign language, as a highly structured set of gestures, is an essential means of daily communication for deaf-mute people. Sign language recognition is an important component of human-computer interaction, and its research and implementation have significant academic value and broad application prospects.
Stereo vision is an important branch of computer vision. In binocular stereo vision, two or more images of a measured object are captured from different positions by imaging equipment, and the three-dimensional coordinates of points on the object's surface are computed from the positional deviation between corresponding image points using the principle of triangulation, so that the object's shape or surface can finally be reconstructed digitally in three dimensions. A conventional binocular device, however, suffers from a visual blind area: as shown in fig. 1, when the recognition plane is perpendicular to the plane of the binocular cameras, parts of the object occlude one another and the blind area cannot be recognized accurately; a stand-alone gesture recognition scheme therefore cannot accurately capture the information the deaf-mute person's sign language is meant to convey, reducing the accuracy of sign language recognition.
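The triangulation principle described above can be sketched numerically. The focal length, baseline, and disparity values below are illustrative assumptions, not parameters from the patent:

```python
def depth_from_disparity(focal_px: float, baseline_m: float, disparity_px: float) -> float:
    """Classic binocular triangulation: Z = f * B / d.

    focal_px     -- focal length in pixels
    baseline_m   -- distance between the two camera centers, in meters
    disparity_px -- horizontal shift of the same point between the two images
    """
    if disparity_px <= 0:
        raise ValueError("disparity must be positive (point visible in both views)")
    return focal_px * baseline_m / disparity_px

# Illustrative numbers: 700 px focal length, 12 cm baseline, 35 px disparity.
z = depth_from_disparity(700.0, 0.12, 35.0)
print(round(z, 3))  # 2.4 (meters)
```

A point occluded in one of the two views yields no disparity at all, which is exactly the blind-area failure the third camera is meant to compensate.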
Disclosure of Invention
The utility model aims to provide a trinocular sign language recognition device that overcomes the visual blind area of binocular equipment and the inability of a stand-alone gesture recognition scheme to accurately capture the information a deaf-mute person's sign language conveys. To overcome the blind area, the utility model adds a monocular camera device on the vertical plane of the binocular equipment to collect information in the blind area, which complements the binocular data. To address the limits of stand-alone gesture recognition, a group of sign language gestures is jointly defined by the signer's facial expression, the relative position of the hand on the body during signing, and the recognized hand gesture; combining these defining elements increases the number of recognizable signs and enriches the sign language library. At the same time, the combined elements refine each sign, improving its specificity, avoiding misjudgments caused by similar gestures, and improving the accuracy of sign language recognition.
The utility model adopts the following technical scheme: a trinocular sign language recognition apparatus, comprising a visual platform, a left camera device, a right camera device, a monocular adjusting bracket, a monocular camera device, and image processing equipment;
the visual platform is arranged horizontally, the left camera device and the right camera device are symmetrically arranged on it, and the monocular adjusting bracket is arranged on the perpendicular bisector of the line connecting the lens centers of the left and right camera devices; the monocular adjusting bracket comprises a support rod, a lead screw, lead screw mounting seats, bearings, a crank, and a lifting nut; the support rod stands perpendicular to the visual platform, a lifting limiting groove runs along the front of the support rod in its length direction, and a height scale is provided on its side surface; the lead screw is arranged in parallel in front of the support rod, its upper and lower ends are rotatably mounted in lead screw mounting seats fitted with bearings, and the crank is provided at its upper end; the lifting nut is mounted on the lead screw, a lifting clamping part on its rear side slides vertically in the lifting limiting groove, a rotating connecting part is provided on its front side, a horizontal rotating part that can rotate horizontally left and right is mounted on the rotating connecting part, and the monocular camera device is horizontally mounted at the front end of the horizontal rotating part; a height pointer on the lifting clamping part points at the height scale; a protractor extends from the bottom surface of the lifting nut, parallel to the visual platform, and an angle pointer on the monocular camera device points at the graduations on the protractor; the height pointer sits at the same height as the lens center of the monocular camera device and thus indicates that center's height; the angle pointer is parallel to the central axis of the monocular camera device's lens and thus indicates its horizontal rotation angle; the central axis of the monocular camera device's lens is perpendicular to the vertical plane containing the left and right camera devices;
the left camera device, the right camera device, and the monocular camera device jointly establish a virtual three-dimensional coordinate system: the line containing the central axis of the monocular camera device's lens is the X axis; the line perpendicular to the visual platform through the midpoint of the segment connecting the lens centers of the left and right camera devices is the Z axis; the line perpendicular to the XZ plane and intersecting the X and Z axes is the Y axis; and the intersection of the three axes is the coordinate origin (X0, Y0, Z0); once established, this virtual coordinate system does not change, i.e., the positions and angles of the three camera devices stay fixed; the monocular adjusting bracket is used for calibration before each use, returning the monocular camera device to its initial position, so that during gesture recognition a gesture displayed by the deaf-mute person can be judged accurately as long as it reproduces a gesture recorded in the gesture model library, avoiding recognition errors caused by changes in the monocular camera device's position or angle; in use, turning the crank clockwise or anticlockwise raises or lowers the lifting nut; because the height pointer points at the height scale, the adjusted height is easy to read, allowing accurate adjustment of the lens-center height of the monocular camera device; likewise, because the angle pointer is parallel to the lens's central axis and points at the protractor graduations, the horizontal rotation angle can be adjusted accurately by reading the protractor;
the left camera device, the right camera device and the monocular camera device are connected with the image processing equipment, wherein the left camera device and the right camera device form a binocular stereoscopic vision system;
the image processing equipment comprises at least an image acquisition unit, a three-dimensional modeling unit, a gesture model library, a facial expression library, a hand and human body relative position library, a combined sign language library, a gesture verification unit, a facial expression verification unit, a hand position verification unit, a combined sign language verification unit, a sign language conversion unit, and a sign language output unit;
the gesture model library stores gesture models, which are gesture model information collected and entered in advance by the left camera device, the right camera device, and the monocular camera device, including finger-joint coordinates and vector data;
storing facial expression pictures in a facial expression library; storing the pictures of the relative positions of the human hand and the human body in a human hand and human body relative position library; and storing the combined sign language in a combined sign language library, wherein each combined sign language is defined by a gesture model, a facial expression picture and a picture of the relative position of the human hand and the human body.
Further, the image processing device is a PC.
Further, the sign language conversion unit converts the combined sign language into characters and sends the characters to the sign language output unit for output; the sign language output unit is a character display.
Further, the sign language conversion unit converts the combined sign language into voice and sends the voice to the sign language output unit for output; the sign language output unit is a voice player.
Further, the three-dimensional modeling unit uses the OpenCV computer vision library and builds the gesture model from the depth (distance) information from the human hand to the left and right camera devices, acquired from below by the left and right camera devices, and the depth information from the hand to the monocular camera device, acquired from the side by the monocular camera device.
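The blind-area complementing idea can be illustrated with a minimal sketch; the joint names and coordinates below are hypothetical, and a real system would derive them with OpenCV from the three camera feeds:

```python
# Hypothetical joint labels; None marks a joint lost in the binocular blind area.
binocular_points = {
    "thumb_tip": None,               # occluded in the bottom-up binocular view
    "index_tip": (0.10, 0.02, 0.31),
    "wrist":     (0.00, 0.00, 0.30),
}
# The side-view monocular camera does see the occluded joint.
monocular_points = {
    "thumb_tip": (0.08, 0.05, 0.29),
}

def complement_blind_area(binocular, monocular):
    """Fill binocular gaps with monocular side-view measurements."""
    merged = dict(binocular)
    for joint, xyz in binocular.items():
        if xyz is None and joint in monocular:
            merged[joint] = monocular[joint]
    return merged

full_model = complement_blind_area(binocular_points, monocular_points)
print(full_model["thumb_tip"])  # (0.08, 0.05, 0.29)
```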
A multi-information-fusion sign language recognition method using the trinocular sign language recognition device comprises the following steps:
(1) initial position calibration: the height and the angle of the monocular camera device are adjusted through the monocular adjusting bracket, so that the requirements for establishing a virtual three-dimensional coordinate system are met;
(2) the gesture models are entered in advance through the left camera device, the right camera device, and the monocular camera device to establish the gesture model library: a gesture is made and held within the virtual three-dimensional coordinate system and captured continuously for 10-15 minutes by the three camera devices; the left and right camera devices form a binocular stereo vision system and acquire, from below, the depth information from the hand to the left and right camera devices; the monocular camera device acquires, from the side, the depth information from the hand to itself, which complements the blind area of the binocular stereo vision system, so that more complete gesture information is extracted and the binocular blind area is largely or entirely eliminated; the data are entered into a MySQL database using the Java programming language to complete gesture model entry; the entered gesture model information comprises the coordinates and vector data of the finger joints in the virtual three-dimensional coordinate system;
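The database-entry step can be sketched as follows. The patent enters the data into MySQL with Java; as an illustrative stand-in, this sketch uses Python's built-in sqlite3 with a hypothetical table schema:

```python
import sqlite3

# Hypothetical schema: one row per finger joint, storing its coordinates
# and direction vector in the virtual three-dimensional coordinate system.
conn = sqlite3.connect(":memory:")
conn.execute("""
    CREATE TABLE gesture_joint (
        gesture TEXT, joint TEXT,
        x REAL, y REAL, z REAL,      -- joint coordinates
        vx REAL, vy REAL, vz REAL    -- joint direction vector
    )
""")
conn.execute(
    "INSERT INTO gesture_joint VALUES (?,?,?,?,?,?,?,?)",
    ("thumb_up", "thumb_front", 0.08, 0.05, 0.29, 0.0, 0.0, 1.0),
)

row = conn.execute(
    "SELECT x, y, z FROM gesture_joint WHERE gesture = 'thumb_up'"
).fetchone()
print(row)  # (0.08, 0.05, 0.29)
```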
(3) storing the facial expression pictures into a facial expression library, and storing the pictures of the relative positions of the human hands and the human bodies into a human hand and human body relative position library; storing combined sign languages in a combined sign language library, wherein each combined sign language is defined by a gesture model, a facial expression picture and a picture of the relative position of a human hand and a human body;
(4) the deaf-mute person directly faces the monocular camera device and extends a hand into the virtual three-dimensional coordinate system to perform sign language expression;
(5) the left and right camera devices form a binocular stereo vision system and acquire, from below, the depth information from the hand to the left and right camera devices; the monocular camera device acquires, from the side, the depth information from the hand to itself, which complements the blind area of the binocular system so that more complete gesture information is extracted and the binocular blind area is largely or entirely eliminated; in addition, the monocular camera device also captures a picture of the relative position of the hand on the body and a picture of the facial expression;
(6) the image acquisition unit receives information acquired by the left camera device, the right camera device and the monocular camera device at the same moment and respectively sends the information to the three-dimensional modeling unit, the human face expression verification unit and the human hand position verification unit;
the three-dimensional modeling unit builds a gesture model from the depth information acquired by the three camera devices and sends it to the gesture verification unit; the gesture verification unit matches the finger-joint coordinates and vector data of the built model against the gesture models stored in the gesture model library, confirms the model with the highest matching degree, and sends it to the combined sign language verification unit;
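The joint-coordinate matching performed by the gesture verification unit can be sketched as a nearest-model search. The joint names, coordinates, and the Euclidean distance measure are illustrative assumptions; the patent does not specify how matching degree is computed:

```python
import math

def match_gesture(observed, library):
    """Return the library gesture whose joints are closest to the observation.

    observed / library values: {joint_name: (x, y, z)} dictionaries.
    A full system would also compare stored direction vectors; this sketch
    matches on coordinates only, summing per-joint Euclidean distances.
    """
    best_name, best_score = None, float("inf")
    for name, model in library.items():
        score = sum(math.dist(model[j], observed[j]) for j in model)
        if score < best_score:
            best_name, best_score = name, score
    return best_name

library = {
    "thumb_up":  {"thumb_tip": (0.0, 0.0, 1.0), "wrist": (0.0, 0.0, 0.0)},
    "flat_palm": {"thumb_tip": (1.0, 0.0, 0.0), "wrist": (0.0, 0.0, 0.0)},
}
observed = {"thumb_tip": (0.05, 0.0, 0.95), "wrist": (0.0, 0.0, 0.0)}
print(match_gesture(observed, library))  # thumb_up
```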
the facial expression verification unit performs feature matching between the facial expression pictures stored in the facial expression library and the facial expression picture collected by the monocular camera device, confirms the picture with the highest matching degree, and sends it to the combined sign language verification unit;
the hand position verification unit performs feature matching between the hand-body relative position pictures stored in the hand and human body relative position library and the picture collected by the monocular camera device, confirms the picture with the highest matching degree, and sends it to the combined sign language verification unit;
the combined sign language verification unit compares these against the combined sign languages stored in the combined sign language library, confirms the combined sign language, and sends it to the sign language conversion unit for conversion; the sign language conversion unit sends the converted result to the sign language output unit for output.
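The combined verification step can be illustrated as a lookup keyed on all three elements jointly; the library entries below are hypothetical examples, not signs defined in the patent:

```python
# Hypothetical combined sign library: each sign is jointly defined by a
# gesture model, a facial expression, and a hand-on-body position.
combined_library = {
    ("thumb_up", "smile", "front_of_left_chest"): "good",
    ("thumb_up", "neutral", "beside_head"): "clever",  # invented example entry
}

def verify_combined(gesture, expression, position):
    """Return the meaning jointly defined by all three elements, or None."""
    return combined_library.get((gesture, expression, position))

print(verify_combined("thumb_up", "smile", "front_of_left_chest"))  # good
```

Because the key is the full triple, the same hand gesture made with a different expression or at a different body position resolves to a different sign (or none), which is the specificity improvement the patent claims.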
The beneficial effects of the utility model are as follows: it provides a trinocular sign language recognition device that overcomes the visual blind area of binocular equipment and the inability of a stand-alone gesture recognition scheme to accurately capture the information a deaf-mute person's sign language conveys. To overcome the blind area, a monocular camera device is added on the vertical plane of the binocular equipment to collect information in the blind area, which complements the binocular data. A group of sign language gestures is jointly defined by the signer's facial expression, the relative position of the hand on the body during signing, and the recognized hand gesture; combining these defining elements increases the number of recognizable signs and enriches the sign language library, while the combined elements refine each sign, improving its specificity, avoiding misjudgments caused by similar gestures, and improving the accuracy of sign language recognition.
Drawings
Fig. 1 is a schematic view of the visual blind area limitation of a conventional binocular device.
Fig. 2 is a schematic view of the three-dimensional structure of the sign language recognition device of the utility model.
Fig. 3 is a schematic view of the monocular adjusting bracket of the utility model.
Fig. 4 is a schematic view of the lifting nut of the utility model.
Fig. 5 is a schematic view of the angle pointer structure of the utility model.
Fig. 6 is a flow chart of the multi-information-fusion sign language recognition method of the utility model.
Fig. 7 is a schematic representation of the combined sign language for "good" in the embodiment of the utility model.
Fig. 8 is a schematic diagram of the gesture model information for the "right-hand vertical thumb gesture" entered in the embodiment of the utility model.
In the figures: visual platform 1, left camera device 2, right camera device 3, monocular adjusting bracket 4, monocular camera device 5, support rod 6, lead screw 7, lead screw mounting seat 8, bearing 9, crank 10, lifting nut 11, lifting limiting groove 6-1, height scale 6-2, lifting clamping part 11-1, rotating connecting part 11-2, horizontal rotating part 11-3, height pointer 11-4, protractor 11-5, angle pointer 5-1.
Detailed Description
To make the technical solution, purpose, and effects of the utility model clearly understood, detailed embodiments are described below with reference to the accompanying drawings.
In an embodiment, as shown in fig. 7, the combination of a "right-hand vertical thumb gesture", a "smiling facial expression", and a "gesture in front of the left chest" defines a combined sign language meaning "good"; the trinocular sign language recognition device provided by the utility model is used to judge and recognize it;
(1) initial position calibration: the height and angle of the monocular camera device are adjusted through the monocular adjusting bracket to satisfy the requirements for establishing the virtual three-dimensional coordinate system: the line containing the central axis of the monocular camera device's lens is the X axis; the line perpendicular to the visual platform through the midpoint of the segment connecting the lens centers of the left and right camera devices is the Z axis; the line perpendicular to the XZ plane and intersecting the X and Z axes is the Y axis; and the intersection of the three axes is the coordinate origin (X0, Y0, Z0); once established, this virtual coordinate system does not change, i.e., the positions and angles of the three camera devices stay fixed; the monocular adjusting bracket is used for calibration before each use, returning the monocular camera device to its initial position, so that during gesture recognition a gesture displayed by the deaf-mute person can be judged accurately as long as it reproduces a gesture recorded in the gesture model library, avoiding recognition errors caused by changes in the monocular camera device's position or angle;
(2) the gesture model of the "right-hand vertical thumb gesture" is entered in advance through the left camera device, the right camera device, and the monocular camera device and stored in the gesture model library: the "right-hand vertical thumb" gesture is made in front of the left chest within the virtual three-dimensional coordinate system and held, and captured continuously for 10-15 minutes by the three camera devices; the left and right camera devices form a binocular stereo vision system and acquire, from below, the depth information from the hand to the left and right camera devices; the thumb lies in the binocular visual blind area, so the monocular camera device acquires, from the side, the depth information from the thumb to itself, which complements the blind-area data of the binocular system, so that more complete gesture information is extracted and the binocular blind area is largely or entirely eliminated; the data are entered into the MySQL database using the Java language to complete gesture model entry; the entered gesture model information comprises the coordinates and vector data of the finger joints in the virtual three-dimensional coordinate system; as shown in fig. 8, the entered gesture model information for the "right-hand vertical thumb gesture" records the two coordinates at the front joint of the thumb, (X1, Y1, Z1) and (X2, Y2, Z2), together with vector data representing the direction and length of the thumb's front joint;
(3) the picture of the smiling facial expression is stored in the facial expression library, and the picture of the gesture in front of the left chest is stored in the hand and human body relative position library; the combination of the "right-hand vertical thumb gesture" model, the smiling facial expression picture, and the picture of the gesture in front of the left chest is defined as meaning "good" and stored in the combined sign language library;
(4) the deaf-mute person directly faces the monocular camera device, extends a hand into the virtual three-dimensional coordinate system, reproduces the "right-hand vertical thumb gesture" in front of the left chest, and simultaneously makes a smiling facial expression;
(5) the left and right camera devices form a binocular stereo vision system and acquire, from below, the depth information from the hand to the left and right camera devices; the monocular camera device acquires, from the side, the depth information from the hand to itself as a complement to the blind area of the binocular system; in addition, the monocular camera device captures a picture of the "right-hand vertical thumb gesture" in front of the left chest and a picture of the smiling facial expression;
(6) the image acquisition unit receives information acquired by the left camera device, the right camera device and the monocular camera device at the same moment and respectively sends the information to the three-dimensional modeling unit, the human face expression verification unit and the human hand position verification unit;
the three-dimensional modeling unit builds the gesture model of the "right-hand vertical thumb gesture" from the depth information acquired by the three camera devices and sends it to the gesture verification unit; the gesture verification unit matches the finger-joint coordinates and vector data of the built model against the gesture models pre-entered in the gesture model library, confirms the model with the highest matching degree, and sends it to the combined sign language verification unit; the three-dimensional modeling unit can use the OpenCV computer vision library to build the gesture model from the depth information acquired from below by the left and right camera devices and from the side by the monocular camera device; the matching of finger-joint coordinates and vector data is implemented by computer programming;
the facial expression verification unit performs feature matching between the facial expression pictures stored in the facial expression library and the facial expression picture collected by the monocular camera device, confirms the picture with the highest matching degree, and sends it to the combined sign language verification unit; this similar-picture search can be programmed with a prior-art perceptual hash algorithm, which generates a fingerprint string for each image and then compares the fingerprints of different images; the closer the fingerprints, the more similar the images; as this is prior art, it is not described in detail;
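The perceptual-hash idea can be sketched with a simple average hash, one common variant of the family; the patent does not specify which variant it uses, and the synthetic image below is purely illustrative:

```python
def average_hash(gray, size=8):
    """Fingerprint a grayscale image (2-D list of 0-255 values):
    downsample to size x size by block averaging, then set each bit to
    whether that block's average is above the overall mean."""
    h, w = len(gray), len(gray[0])
    bh, bw = h // size, w // size
    blocks = [
        sum(gray[r][c]
            for r in range(i * bh, (i + 1) * bh)
            for c in range(j * bw, (j + 1) * bw)) / (bh * bw)
        for i in range(size) for j in range(size)
    ]
    mean = sum(blocks) / len(blocks)
    return "".join("1" if b > mean else "0" for b in blocks)

def hamming(f1, f2):
    """Fingerprint distance: number of differing bits; smaller = more similar."""
    return sum(a != b for a, b in zip(f1, f2))

# A tiny synthetic 16x16 "image": identical inputs give distance 0.
img = [[(r * 7 + c * 13) % 256 for c in range(16)] for r in range(16)]
print(hamming(average_hash(img), average_hash(img)))  # 0
```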
the hand position verification unit performs feature matching between the hand-body relative position pictures stored in the hand and human body relative position library and the picture collected by the monocular camera device, confirms the picture with the highest matching degree, and sends it to the combined sign language verification unit; this recognition can likewise be programmed with the prior-art perceptual hash algorithm;
the combined sign language verification unit compares these against the combined sign languages stored in the combined sign language library, confirms the combined sign language that simultaneously satisfies the "right-hand vertical thumb gesture", the "smiling facial expression", and the "gesture in front of the left chest", i.e., the combined sign language meaning "good", and sends it to the sign language conversion unit; the sign language conversion unit converts it into text or voice and sends it to the sign language output unit, which uses a text display and a voice player; the comparison and confirmation of the combined sign language are implemented by computer programming.
Although the utility model has been described in detail with reference to the foregoing embodiments, those skilled in the art will appreciate that the described embodiments may still be modified, and that some of their technical features may be replaced by equivalents.

Claims (2)

1. A trinocular vision sign language recognition apparatus, comprising: a visual platform, a left camera device, a right camera device, a monocular adjusting bracket, a monocular camera device and an image processing device;
the visual platform is horizontally arranged, the left camera device and the right camera device are symmetrically arranged on the visual platform, and the monocular adjusting bracket is arranged on the perpendicular bisector of the line connecting the lens centers of the left camera device and the right camera device; the monocular adjusting bracket comprises a supporting rod, a lead screw, a lead screw mounting seat, a bearing, a crank handle and a lifting nut; the supporting rod is arranged perpendicular to the visual platform, a lifting limiting groove is arranged on the front of the supporting rod along its length direction, and a height scale is arranged on the side surface of the supporting rod; the lead screw is arranged parallel to and in front of the supporting rod, the upper and lower ends of the lead screw are rotatably mounted on the lead screw mounting seat, the bearing is arranged in the lead screw mounting seat, and the crank handle is arranged at the upper end of the lead screw; the lifting nut is mounted on the lead screw, a lifting clamping part is arranged on the rear side surface of the lifting nut and is slidably clamped in the lifting limiting groove so as to slide vertically, a rotating connecting part is arranged on the front side surface of the lifting nut, a horizontal rotating part capable of rotating horizontally left and right is arranged on the rotating connecting part, and the monocular camera device is horizontally mounted at the front end of the horizontal rotating part; a height pointer is arranged on the lifting clamping part and points to the height scale; a protractor extends from the bottom surface of the lifting nut and is parallel to the visual platform, an angle pointer is arranged on the monocular camera device, and the angle pointer points to the scale marks on the protractor; the height pointer is positioned at the same height as the center of the lens of the 
monocular camera device and indicates the height of the center of the lens of the monocular camera device; the angle pointer is parallel to the central axis of the lens of the monocular camera device and indicates the horizontal rotation angle of the monocular camera device; the central axis of the lens of the monocular camera device is perpendicular to the vertical plane in which the left camera device and the right camera device are located; the left camera device, the right camera device and the monocular camera device jointly establish a virtual three-dimensional coordinate system, wherein the straight line along the central axis of the lens of the monocular camera device is the X axis, the straight line perpendicular to the visual platform and passing through the midpoint of the line connecting the lens centers of the left camera device and the right camera device is the Z axis, the straight line perpendicular to the XZ plane and intersecting the X axis is the Y axis, and the intersection point of the X axis, the Y axis and the Z axis is the coordinate origin;
the left camera device, the right camera device and the monocular camera device are connected with the image processing equipment, wherein the left camera device and the right camera device form a binocular stereoscopic vision system.
2. The apparatus of claim 1, wherein the image processing device is a PC.
CN201822059170.3U 2018-12-10 2018-12-10 Three-vision sign language recognition device Active CN209980267U (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201822059170.3U CN209980267U (en) 2018-12-10 2018-12-10 Three-vision sign language recognition device

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201822059170.3U CN209980267U (en) 2018-12-10 2018-12-10 Three-vision sign language recognition device

Publications (1)

Publication Number Publication Date
CN209980267U true CN209980267U (en) 2020-01-21

Family

ID=69250363

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201822059170.3U Active CN209980267U (en) 2018-12-10 2018-12-10 Three-vision sign language recognition device

Country Status (1)

Country Link
CN (1) CN209980267U (en)

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN109460748A (en) * 2018-12-10 2019-03-12 内蒙古科技大学 A kind of trinocular vision hand language recognition device and multi-information fusion sign Language Recognition Method
CN109460748B (en) * 2018-12-10 2024-03-01 内蒙古科技大学 Three-dimensional visual sign language recognition device and multi-information fusion sign language recognition method

Similar Documents

Publication Publication Date Title
US10997792B2 (en) Kiosk for viewing of dental treatment outcomes
CN108519676B (en) Head-wearing type vision-aiding device
WO2020042345A1 (en) Method and system for acquiring line-of-sight direction of human eyes by means of single camera
CN105574518B (en) Method and device for detecting living human face
CN110896609B (en) TMS positioning navigation method for transcranial magnetic stimulation treatment
WO2020062773A1 (en) Tms positioning navigation method used for transcranial magnetic stimulation treatment
CN107595388A (en) A kind of near infrared binocular visual stereoscopic matching process based on witch ball mark point
CN109460748B (en) Three-dimensional visual sign language recognition device and multi-information fusion sign language recognition method
TWI557601B (en) A puppil positioning system, method, computer program product and computer readable recording medium
CN209980267U (en) Three-vision sign language recognition device
CN110477921B (en) Height measurement method based on skeleton broken line Ridge regression
CN112184898A (en) Digital human body modeling method based on motion recognition
JP2006285531A (en) Detection device for eye direction, detecting method for eye direction, program for executing the same detecting method for eye direction by computer
JP2005091085A (en) Noncontact type joint angle measuring system
CN113197542B (en) Online self-service vision detection system, mobile terminal and storage medium
CN112656366B (en) Method and system for measuring pupil size in non-contact manner
CN115778333A (en) Method and device for visually positioning cun, guan and chi pulse acupuncture points
CN115294018A (en) Neck dystonia identification system based on RGB-D image
CN112991437B (en) Full-automatic acupuncture point positioning method based on image expansion and contraction technology
CN112052827B (en) Screen hiding method based on artificial intelligence technology
Wu et al. Explore on Doctor's Head Orientation Tracking for Patient's Body Surface Projection Under Complex Illumination Conditions
CN114387154B (en) Three-dimensional airway environment construction method for intubation robot
CN109857252A (en) A kind of method of virtual display image
CN116831560A (en) Human height detection method based on skeleton key point recognition
CN112826441A (en) Interpupillary distance measuring method based on augmented reality technology

Legal Events

Date Code Title Description
GR01 Patent grant