CN116225236A - Intelligent home scene interaction method based on acousto-optic control - Google Patents
- Publication number
- CN116225236A (application CN202310497713.2A; granted as CN116225236B)
- Authority
- CN
- China
- Prior art keywords
- image
- world coordinates
- knuckle
- dimensional
- pointing angle
- Prior art date: 2023-05-06
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Granted
Classifications
-
- G—PHYSICS
- G05—CONTROLLING; REGULATING
- G05B—CONTROL OR REGULATING SYSTEMS IN GENERAL; FUNCTIONAL ELEMENTS OF SUCH SYSTEMS; MONITORING OR TESTING ARRANGEMENTS FOR SUCH SYSTEMS OR ELEMENTS
- G05B15/00—Systems controlled by a computer
- G05B15/02—Systems controlled by a computer electric
-
- G—PHYSICS
- G05—CONTROLLING; REGULATING
- G05B—CONTROL OR REGULATING SYSTEMS IN GENERAL; FUNCTIONAL ELEMENTS OF SUCH SYSTEMS; MONITORING OR TESTING ARRANGEMENTS FOR SUCH SYSTEMS OR ELEMENTS
- G05B19/00—Programme-control systems
- G05B19/02—Programme-control systems electric
- G05B19/418—Total factory control, i.e. centrally controlling a plurality of machines, e.g. direct or distributed numerical control [DNC], flexible manufacturing systems [FMS], integrated manufacturing systems [IMS] or computer integrated manufacturing [CIM]
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F3/00—Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
- G06F3/01—Input arrangements or combined input and output arrangements for interaction between user and computer
- G06F3/017—Gesture based interaction, e.g. based on a set of recognized hand gestures
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V40/00—Recognition of biometric, human-related or animal-related patterns in image or video data
- G06V40/20—Movements or behaviour, e.g. gesture recognition
- G06V40/28—Recognition of hand or arm movements, e.g. recognition of deaf sign language
-
- G—PHYSICS
- G05—CONTROLLING; REGULATING
- G05B—CONTROL OR REGULATING SYSTEMS IN GENERAL; FUNCTIONAL ELEMENTS OF SUCH SYSTEMS; MONITORING OR TESTING ARRANGEMENTS FOR SUCH SYSTEMS OR ELEMENTS
- G05B2219/00—Program-control systems
- G05B2219/20—Pc systems
- G05B2219/26—Pc applications
- G05B2219/2642—Domotique, domestic, home control, automation, smart house
-
- Y—GENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
- Y02—TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
- Y02P—CLIMATE CHANGE MITIGATION TECHNOLOGIES IN THE PRODUCTION OR PROCESSING OF GOODS
- Y02P90/00—Enabling technologies with a potential contribution to greenhouse gas [GHG] emissions mitigation
- Y02P90/02—Total factory control, e.g. smart factories, flexible manufacturing systems [FMS] or integrated manufacturing systems [IMS]
Abstract
An intelligent home scene interaction method based on acousto-optic control comprises the following steps: a user gives a voice instruction and gesture information; in response to the user's voice instruction, 3 images containing the gesture information are captured with a trinocular stereo vision system, namely: a first image, a second image, and a third image; the world coordinates of the knuckle are obtained from the 3 images; the three-dimensional pointing angle of the finger is obtained from the 3 images; and the unique target object pointed at by the gesture is determined from the world coordinates of the knuckle, the three-dimensional pointing angle, and the world coordinates of all target objects, and the voice instruction is executed on that unique target object. When obtaining the three-dimensional pointing angle through the trinocular stereo vision system, the invention directly fits a two-dimensional straight line in each image; neither image depth information nor image distortion and other factors need to be considered, which greatly improves the accuracy of the result.
Description
Technical Field
The invention belongs to the technical field of intelligent home control, and particularly relates to an intelligent home scene interaction method based on acousto-optic control.
Background
In the related art of smart home control, to operate a particular lamp, for example, the user must first say the lamp's name and then speak the voice command.
However, this method has a drawback. Saying a device's name is naturally no problem for the homeowner or for younger residents. For guests, however, and especially for middle-aged and elderly people, memory declines with age; moreover, a visitor who only comes at weekends to see or look after the children does not live in the home and cannot possibly remember so many lamp names. Considering that a key user group of the smart home is precisely elderly people with limited mobility, a method that can work with voice instructions to achieve efficient and accurate control is needed.
Prior art document 1 (CN 109839827 A) discloses a gesture-recognition smart home control system based on full-space position information; it forms a spatial pointing vector from the spatial positions (i.e., three-dimensional world coordinates) of the wrist and the fingertip (see paragraph [0053] of its specification). However, at the current level of image processing, calculating a spatial position must account not only for influence factors such as camera distortion, but also for whether the intrinsic and extrinsic parameters of the camera are accurate. That is, the determination of the spatial position itself carries a large error. In addition, in a smart home the ratio of the target-to-fingertip distance to the wrist-to-fingertip distance is much larger than 1, so a tiny error at the source grows into a large deviation at the target. When the spatial pointing vector is obtained from only 2 spatial positions, the error of those positions is amplified in proportion to this distance ratio, and the accuracy of the result is greatly reduced.
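For a rough sense of the scale of this amplification (the distances below are illustrative assumptions, not values from the cited document): with a wrist-to-fingertip baseline of $d_{wf} = 0.15\,\mathrm{m}$, a target $d_{ft} = 3\,\mathrm{m}$ from the fingertip, and a fingertip localization error of $\delta p = 5\,\mathrm{mm}$,

$$ \delta\theta \approx \frac{\delta p}{d_{wf}} = \frac{0.005}{0.15} \approx 0.033\ \mathrm{rad} \approx 1.9^{\circ}, \qquad \delta x \approx \delta\theta \cdot d_{ft} \approx 0.10\ \mathrm{m}, $$

i.e., a 5 mm error at the hand becomes roughly 10 cm of lateral error at the target.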
Disclosure of Invention
In order to overcome the defects in the prior art, the invention provides an intelligent home scene interaction method based on acousto-optic control.
The invention adopts the following technical scheme.
The first aspect of the invention discloses an acousto-optic control-based intelligent home scene interaction method, which comprises the following steps:
step 1, a user gives a voice instruction and gesture information; in response to the user's voice instruction, capturing 3 images containing the gesture information with a trinocular stereo vision system, wherein the three images are respectively: a first image, a second image, and a third image;
step 2, obtaining world coordinates of the knuckle according to the 3 images;
step 3, obtaining a three-dimensional pointing angle of the finger according to the 3 images;
and step 4, determining a unique target object pointed at by the gesture according to the world coordinates of the knuckle, the three-dimensional pointing angle, and the world coordinates of all target objects, and executing the voice instruction on the unique target object.
The second aspect of the invention discloses an acousto-optic control based intelligent home scene interaction system, which comprises: a trinocular stereo vision system and a central processing unit; wherein the central processing unit comprises: an image processing module, a voice recognition module, a calculation module, and a data storage module;
the image processing module is used for responding to a voice instruction of a user and acquiring 3 images containing gesture information with the trinocular stereo vision system, wherein the three images are respectively: a first image, a second image, and a third image; and, in combination with the calculation module, for obtaining the world coordinates of the knuckle and the three-dimensional pointing angle of the finger;
the data storage module is used for storing world coordinates of all targets;
the computing module is used for determining a unique target object pointed by the gesture according to the world coordinates of the knuckle, the three-dimensional pointing angle and the world coordinates of all target objects;
the voice recognition module is used for executing voice instructions on the unique target object.
Compared with the prior art, the invention has the following advantages:
(1) When obtaining the three-dimensional pointing angle, the invention directly fits a two-dimensional straight line in each image of the trinocular stereo vision system; neither image depth information nor image distortion and other factors need to be considered. The accuracy of the result is thus greatly improved.
(2) The invention also analyzes the special case in which the gesture information alone would fail to locate the target object.
Drawings
Fig. 1 is a schematic diagram of the finger contour.
Fig. 2 is a diagram of the spatial relationship of the first camera to the knuckle.
Fig. 3 is a diagram of the relationship between a gesture and a target object in a special case.
Fig. 4 is a flowchart of a smart home scenario interaction method based on acousto-optic control.
Detailed Description
The present application is further described below with reference to the accompanying drawings. The following examples are only for more clearly illustrating the technical solutions of the present invention and are not intended to limit the scope of protection of the present application.
In connection with the background art, it will be appreciated that from an image taken by a single camera it is usually easy to determine an object's position in the planar coordinates of the image view, but difficult to determine its position perpendicular to the image, i.e., the image depth information. Typically, at least 2 cameras are required to establish the three-dimensional position information of an object, and even then the position perpendicular to the images is necessarily less accurate than the in-plane coordinates. Thus, in practice, the three-dimensional position information of an object always has one deficient dimension (namely, the dimension perpendicular to the image).
To address these defects, the invention provides an acousto-optic control-based intelligent home scene interaction method which, as shown in Fig. 4, comprises the following steps:
Step 1, a user gives a voice instruction and gesture information; in response to the user's voice instruction, 3 images containing the gesture information are captured with the trinocular stereo vision system, namely: the first image, the second image, and the third image.
The trinocular stereo vision system consists of 3 cameras, preferably arranged as a regular (equilateral) triangle. The 3 cameras should always keep their orientations consistent, namely: when 1 camera rotates, the other 2 cameras rotate synchronously.
Step 2, obtaining the world coordinates of the knuckle according to the 3 images.
Step 2, in essence, obtains the world coordinates of the knuckle through a binocular (two-camera) subset of the vision system; thus, in some embodiments, step 2 specifically includes:
and 2.1, respectively finding out two-dimensional coordinates of the knuckle on the first image and the second image.
Note that the knuckle referred to herein means a fixed point on the finger; thus, the world coordinates of the knuckle represent the coordinates of the finger.
Step 2.2, calculating the world coordinates of the knuckle from its two-dimensional coordinates on the first image and the second image, according to the following projection equations.
For camera $i \in \{1, 2\}$, the pinhole projection model gives:

$$ z_{ci} \begin{bmatrix} u_i \\ v_i \\ 1 \end{bmatrix} = \begin{bmatrix} f_i/\mathrm{d}x_i & 0 & u_{0i} \\ 0 & f_i/\mathrm{d}y_i & v_{0i} \\ 0 & 0 & 1 \end{bmatrix} \begin{bmatrix} R_i & T_i \end{bmatrix} \begin{bmatrix} X \\ Y \\ Z \\ 1 \end{bmatrix} $$

wherein $R_1, R_2$ are the rotation matrices of the first and second cameras, and $T_1, T_2$ are their translation vectors; $(u_1, v_1)$ are the two-dimensional coordinates of the knuckle on the first image, and $(u_2, v_2)$ those on the second image; $\mathrm{d}x_1, \mathrm{d}y_1$ and $\mathrm{d}x_2, \mathrm{d}y_2$ are the actual physical sizes of a single pixel along the X and Y axes of the first and second images, respectively; $f_1, f_2$ are the focal lengths of the two cameras; $(u_{01}, v_{01})$ and $(u_{02}, v_{02})$ are the coordinates of the origins of the pixel coordinate systems of the first and second images; $(X, Y, Z)$ are the world coordinates of the knuckle; and $z_{c1}, z_{c2}$ are the z-axis (depth) components of the knuckle in the coordinate frames of the first and second cameras, respectively. Stacking the two projections gives four linear equations in $(X, Y, Z)$, from which the world coordinates of the knuckle are solved.
It should be noted that the rotation matrices can be obtained by calibrating the intrinsic and extrinsic parameters of the cameras, and that the translation vectors are determined by the choice of the pixel-coordinate origins and the world-coordinate origin; both are constants.
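For illustration, below is a minimal numerical sketch of this two-view triangulation (the intrinsics, extrinsics, and pixel coordinates are assumed placeholder values, not calibration data from the invention):

```python
import numpy as np

def triangulate(P1, P2, uv1, uv2):
    """Linear (DLT) triangulation of one point from two views.

    P1, P2   : 3x4 projection matrices K_i @ [R_i | T_i]
    uv1, uv2 : pixel coordinates (u, v) of the knuckle in each image
    Returns the world coordinates (X, Y, Z).
    """
    u1, v1 = uv1
    u2, v2 = uv2
    # Each view contributes two linear equations in (X, Y, Z, 1).
    A = np.stack([
        u1 * P1[2] - P1[0],
        v1 * P1[2] - P1[1],
        u2 * P2[2] - P2[0],
        v2 * P2[2] - P2[1],
    ])
    # Homogeneous least squares: smallest right singular vector of A.
    _, _, Vt = np.linalg.svd(A)
    X = Vt[-1]
    return X[:3] / X[3]

# Placeholder intrinsics/extrinsics (assumed values for illustration only).
K = np.array([[800.0, 0, 320], [0, 800.0, 240], [0, 0, 1]])
P1 = K @ np.hstack([np.eye(3), np.zeros((3, 1))])
P2 = K @ np.hstack([np.eye(3), np.array([[-0.1], [0.0], [0.0]])])
print(triangulate(P1, P2, (320.0, 240.0), (300.0, 240.0)))  # ~ (0, 0, 4)
```

The same routine extends to the trinocular case by stacking two more rows per additional camera, which is one way to realize the joint use of all three cameras mentioned in the next paragraph.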
However, because of step 3, the solution of the invention must use a trinocular stereo vision system rather than a binocular one. Accordingly, in step 2 the three cameras may be used jointly (for example, triangulating over each camera pair and fusing the results) to ensure higher precision; the specific details are not repeated here.
Step 3, obtaining the three-dimensional pointing angle of the finger according to the 3 images.
The step 3 specifically comprises the following steps:
step 3.1, for each image, acquiring a plurality of points on the finger; and fitting according to the plurality of points to obtain the slope of the straight line corresponding to each image.
In some embodiments, step 3.1 may comprise:
Step 3.1.1, extracting a region containing the finger from each image, and sharpening the region to obtain the boundary contour of the finger; the boundary contour is divided into a first contour and a second contour with the fingertip as the dividing point.
Step 3.1.2, sampling the first contour and the second contour to obtain a plurality of points from each, and fitting each contour with the least-squares method to obtain a straight line.
As shown in Fig. 1, the first contour may be the contour above the finger, sampled at the points M1-M6, and the second contour the contour below the finger, sampled at the points N1-N5.
Step 3.1.3, calculating the slope of the straight line corresponding to each image according to the straight lines fitted to the first contour and the second contour.
It should be noted here that the straight-line equation fitted in each image is assumed to be $v = k u + b$, whose slope is $k$. The intercept $b$ does not need to be solved for: it is only a relative offset and is completely irrelevant to the three-dimensional pointing angle.
Step 3.1.3 essentially takes an average. If the slopes of the line fitted to the first contour and the line fitted to the second contour are $k_a$ and $k_b$ respectively, then the slope $k$ of the straight line corresponding to the image is:

$$ k = \frac{k_a + k_b}{2} $$
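As a minimal sketch of steps 3.1.2-3.1.3 (the sampled contour points below are assumed example values, not measurements from Fig. 1):

```python
import numpy as np

def image_slope(upper_pts, lower_pts):
    """Least-squares fit v = k*u + b to each contour, then average the slopes."""
    k_a, _ = np.polyfit([u for u, _ in upper_pts], [v for _, v in upper_pts], deg=1)
    k_b, _ = np.polyfit([u for u, _ in lower_pts], [v for _, v in lower_pts], deg=1)
    return (k_a + k_b) / 2.0

# Assumed sample points on the upper contour (M1..M6) and lower contour (N1..N5).
upper = [(10, 52), (20, 57), (30, 63), (40, 68), (50, 73), (60, 79)]
lower = [(12, 40), (24, 46), (36, 53), (48, 59), (60, 66)]
print(image_slope(upper, lower))  # slope k of the finger line in this image
```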
as can be seen from step 3.1, the invention is precisely that the determination of the image depth information is avoided when the three-dimensional pointing angle of the finger is determined. In addition, each image is independent, and thus it is not necessary to consider factors such as image distortion. Therefore, only two-dimensional information of three images is needed to be considered when the three-dimensional pointing angle is obtained.
Step 3.2, calculating the three-dimensional pointing angle of the finger according to the slope of the straight line corresponding to each image.
Specifically, step 3.2 includes:
it will be appreciated that the three-dimensional pointing angle may be defined by a vectorTo represent. The purpose of step 3.2 is then to find +.>These three values.
For ease of illustration, Fig. 2 shows a diagram of the spatial relationship between the first camera and the knuckle. In the figure, the direction of the solid straight line is the three-dimensional pointing angle of the finger, and the direction of the dashed line is the direction of the finger on the first image; namely, the dashed line is perpendicular to the dash-dot line.
The equation of the straight line formed by the finger is shown below:

$$ \frac{x - X}{a} = \frac{y - Y}{b} = \frac{z - Z}{c} $$

where $(X, Y, Z)$ are the world coordinates of the knuckle obtained in step 2.
Note that: all vectors on fig. 2 do not distinguish between directions.
Taking the first camera in Fig. 2 as an example, the normal vector $\mathbf{n}_1$ of the plane formed by the straight line and the dash-dot line is calculated.
The dashed line is the direction in which this plane intersects the image plane, i.e., the direction of the finger as seen on the first image; its expression is therefore determined by the normal vector $\mathbf{n}_1$. In particular, the slope $k_1$ fitted in step 3.1 fixes the dashed line, and since the finger line lies in the plane with normal $\mathbf{n}_1$, the pointing direction must satisfy $\mathbf{n}_1 \cdot (a, b, c) = 0$.
The slope of the straight line corresponding to each image was obtained in step 3.1, so each of the three cameras yields one such constraint, and a ternary system of simultaneous equations can be established to obtain the three values $a$, $b$, $c$:

$$ \begin{cases} \mathbf{n}_1 \cdot (a, b, c) = 0 \\ \mathbf{n}_2 \cdot (a, b, c) = 0 \\ \mathbf{n}_3 \cdot (a, b, c) = 0 \end{cases} $$

(the direction is determined up to scale, e.g., under the normalization $a^2 + b^2 + c^2 = 1$).
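A sketch of one standard way to realize this system of constraints, back-projecting each fitted image line through its camera center (the intrinsics, rotations, and fitted line coefficients below are assumed placeholders, and the construction is an assumption consistent with the text rather than the patent's exact formulas):

```python
import numpy as np

def finger_direction(cams, lines):
    """Solve n_i . (a, b, c) = 0 for i = 1..3 in the least-squares sense.

    cams  : list of (K, R) per camera -- 3x3 intrinsics, world-to-camera rotation
    lines : list of (k, b) per image  -- fitted 2D line v = k*u + b
    """
    normals = []
    for (K, R), (k, b) in zip(cams, lines):
        l = np.array([k, -1.0, b])   # pixel-coordinate line: k*u - v + b = 0
        n_cam = K.T @ l              # back-projection plane normal (camera frame)
        normals.append(R.T @ n_cam)  # rotate the normal into the world frame
    N = np.stack(normals)
    # Direction (a, b, c) = null vector of N: smallest right singular vector.
    _, _, Vt = np.linalg.svd(N)
    d = Vt[-1]
    # Sign of d is ambiguous; orient it as needed (e.g., away from the wrist).
    return d / np.linalg.norm(d)

# Placeholder calibration and fitted slopes for the three cameras.
K = np.array([[800.0, 0, 320], [0, 800.0, 240], [0, 0, 1]])
Rs = [np.eye(3),
      np.array([[0.0, 0, 1], [0, 1, 0], [-1, 0, 0]]),   # 90 deg about Y
      np.array([[1.0, 0, 0], [0, 0, 1], [0, -1, 0]])]   # 90 deg about X
print(finger_direction([(K, R) for R in Rs], [(0.3, 5.0), (-0.2, 3.0), (0.1, -4.0)]))
```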
Step 4, determining a unique target object pointed at by the gesture according to the world coordinates of the knuckle, the three-dimensional pointing angle, and the world coordinates of all target objects, and executing the voice instruction on the unique target object.
It will be appreciated that in a smart home system the world coordinates of every target object are already entered into the system. Once the world coordinates $(X, Y, Z)$ of the knuckle and the three-dimensional pointing angle $(a, b, c)$ are both determined, the world coordinates $(X_t, Y_t, Z_t)$ of each target object are checked in turn. If the collinearity condition

$$ \frac{X_t - X}{a} = \frac{Y_t - Y}{b} = \frac{Z_t - Z}{c} $$

is satisfied, then the target object indicated by the gesture is determined.
Considering that the target pointed at by a gesture cannot be absolutely exact (that is, the world coordinates of the target cannot fall exactly on the straight line formed by the world coordinates of the knuckle and the three-dimensional pointing angle), a common special case is shown in Fig. 3. In Fig. 3, the target object of the voice command is X; accordingly, the three-dimensional pointing angle intended by the gesture information is L2, but in reality the true three-dimensional pointing angle of the gesture information is L1.
In Fig. 3, L3 is the straight line from the world coordinates of the person's eyes to the world coordinates of the knuckle; this straight line corresponds to a three-dimensional pointing angle $(a_e, b_e, c_e)$, wherein $(X_e, Y_e, Z_e)$ are the world coordinates of the eyes. Note: the world coordinates of the eyes can be obtained in the same way as in step 2. This special case can be generalized as: the straight line L1, the eyes, and the target are all on the same straight line.
Based on this, in Fig. 3, the desired three-dimensional pointing angle $(a_0, b_0, c_0)$ and the true three-dimensional pointing angle $(a, b, c)$ should satisfy a relationship of the form:

$$ w\,|a_0 - a| \le \varepsilon, \qquad w\,|b_0 - b| \le \varepsilon, \qquad w\,|c_0 - c| \le \varepsilon, $$

wherein $w$ and $\varepsilon$ respectively denote the weight value and the weight threshold. It will be appreciated that both should be proportional to the ratio of the eye-to-knuckle distance to the knuckle-to-target distance.
Summarizing the above ideas, in some embodiments step 4 specifically comprises:

Step 4.1, calculating the expected pointing angle $(a_j, b_j, c_j)$ corresponding to each target object according to the world coordinates $(X_j, Y_j, Z_j)$ of that object and the world coordinates $(X, Y, Z)$ of the knuckle, where $j$ is the index of the target. The expected pointing angle of each target is determined by the following formulas:

$$ \frac{X_j - X}{a_j} = \frac{Y_j - Y}{b_j} = \frac{Z_j - Z}{c_j}, \qquad \min(|a_j|, |b_j|, |c_j|) = \min(|a|, |b|, |c|), $$

wherein $\min(\cdot)$ is the minimum operator and $|\cdot|$ is the absolute-value sign. It will be appreciated that without the second equation, $(a_j, b_j, c_j)$ would have infinitely many solutions; the purpose of the second equation is to normalize $(a_j, b_j, c_j)$ against $(a, b, c)$ by matching their minimum absolute components, so that the two vectors are directly comparable.

Step 4.2, if there exists a target whose expected pointing angle $(a_j, b_j, c_j)$ satisfies all of the following:

$$ |a_j - a| \le \varepsilon, \qquad |b_j - b| \le \varepsilon, \qquad |c_j - c| \le \varepsilon, $$

the object is determined to be the unique target pointed at by the gesture.
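Putting step 4 together, a minimal sketch under the reconstructions above (the object list, knuckle position, measured direction, and threshold $\varepsilon$ are assumed example values):

```python
import numpy as np

def expected_angle(obj_xyz, knuckle_xyz, d):
    """Direction from the knuckle to an object, scaled so that its smallest
    absolute component matches that of the measured direction d."""
    v = np.asarray(obj_xyz, dtype=float) - np.asarray(knuckle_xyz, dtype=float)
    # Assumes all components of v are nonzero, as in the formulas above.
    return v * (np.min(np.abs(d)) / np.min(np.abs(v)))

def pick_target(objects, knuckle_xyz, d, eps=0.05):
    """Return the name of the unique object whose expected pointing angle is
    within eps of the measured direction d, component-wise; None otherwise."""
    hits = []
    for name, xyz in objects.items():
        if np.all(np.abs(expected_angle(xyz, knuckle_xyz, d) - d) <= eps):
            hits.append(name)
    return hits[0] if len(hits) == 1 else None

# Assumed world coordinates of registered objects (meters).
objects = {"ceiling lamp": (2.0, 3.0, 2.6), "floor lamp": (-1.0, 2.0, 1.2)}
knuckle = (0.0, 0.5, 1.4)
d = np.array([0.55, 0.69, 0.33])  # measured (a, b, c) from step 3
print(pick_target(objects, knuckle, d))  # -> "ceiling lamp"
```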
Correspondingly, the invention also discloses an acousto-optic control-based intelligent home scene interaction system, which comprises: a trinocular stereo vision system and a central processing unit.
The central processing unit comprises at least: an image processing module, a voice recognition module, a calculation module, and a data storage module.
The image processing module is used for responding to a voice instruction of a user and acquiring 3 images containing gesture information with the trinocular stereo vision system, wherein the three images are respectively: a first image, a second image, and a third image; and, in combination with the calculation module, for obtaining the world coordinates of the knuckle and the three-dimensional pointing angle of the finger.
The data storage module is used for storing world coordinates of all objects.
The computing module is used for determining the unique target object pointed by the gesture according to the world coordinates of the knuckle, the three-dimensional pointing angle and the world coordinates of all target objects.
The voice recognition module is used for executing voice instructions on the unique target object.
While the applicant has described and illustrated the embodiments of the present invention in detail with reference to the drawings, it should be understood by those skilled in the art that the above embodiments are only preferred embodiments of the present invention, and the detailed description is only for the purpose of helping the reader to better understand the spirit of the present invention, and not to limit the scope of the present invention, but any improvements or modifications based on the spirit of the present invention should fall within the scope of the present invention.
Claims (7)
1. An intelligent home scene interaction method based on acousto-optic control, characterized by comprising the following steps:
step 1, a user gives a voice instruction and gesture information; in response to the user's voice instruction, capturing 3 images containing the gesture information with a trinocular stereo vision system, wherein the three images are respectively: a first image, a second image, and a third image;
step 2, obtaining world coordinates of the knuckle according to the 3 images;
step 3, obtaining a three-dimensional pointing angle of the finger according to the 3 images;
and step 4, determining a unique target object pointed at by the gesture according to the world coordinates of the knuckle, the three-dimensional pointing angle and the world coordinates of all target objects, and executing the voice instruction on the unique target object.
2. The intelligent home scene interaction method based on acousto-optic control according to claim 1, wherein the step 2 specifically comprises:
step 2.1, finding the two-dimensional coordinates of the knuckle on the first image and the second image, respectively;
and step 2.2, calculating the world coordinates of the knuckle according to the two-dimensional coordinates of the knuckle on the first image and the second image.
3. The intelligent home scene interaction method based on acousto-optic control according to claim 1, wherein the step 3 specifically comprises:
step 3.1, for each image, acquiring a plurality of points on the finger; fitting according to the plurality of points to obtain the slope of the straight line corresponding to each image;
and step 3.2, calculating the three-dimensional pointing angle of the finger according to the slope of the straight line corresponding to each image.
4. The intelligent home scene interaction method based on acousto-optic control according to claim 3, wherein the step 3.1 specifically comprises:
step 3.1.1, extracting a region containing the finger from each image, and sharpening the region to obtain the boundary contour of the finger; dividing the boundary contour into a first contour and a second contour with the fingertip as the dividing point;
step 3.1.2, sampling the first contour and the second contour to obtain a plurality of points from each, and fitting each contour with the least-squares method to obtain a straight line;
and step 3.1.3, calculating the slope of the straight line corresponding to each image according to the straight lines fitted to the first contour and the second contour.
6. The intelligent home scene interaction method based on acousto-optic control according to claim 1, wherein the step 4 specifically comprises:
step 4.1, calculating the expected pointing angle $(a_j, b_j, c_j)$ corresponding to each target object according to the world coordinates of that object and the world coordinates of the knuckle, as shown in the following formulas:

$$ \frac{X_j - X}{a_j} = \frac{Y_j - Y}{b_j} = \frac{Z_j - Z}{c_j}, \qquad \min(|a_j|, |b_j|, |c_j|) = \min(|a|, |b|, |c|), $$

wherein $\min(\cdot)$ is the minimum operator, $|\cdot|$ is the absolute-value sign, $(X, Y, Z)$ are the world coordinates of the knuckle, $(X_j, Y_j, Z_j)$ are the world coordinates of the target, $(a, b, c)$ is the three-dimensional pointing angle of the finger, and $j$ is the index of the target;
and step 4.2, if there exists a target whose expected pointing angle $(a_j, b_j, c_j)$ satisfies all of the following:

$$ |a_j - a| \le \varepsilon, \qquad |b_j - b| \le \varepsilon, \qquad |c_j - c| \le \varepsilon, $$

determining that object as the unique target pointed at by the gesture, where $\varepsilon$ is the weight threshold.
7. An acousto-optic control based intelligent home scene interaction system for performing the method of any one of claims 1-6, the system comprising: a trinocular stereo vision system and a central processing unit; wherein the central processing unit comprises: an image processing module, a voice recognition module, a calculation module, and a data storage module;
the image processing module is used for responding to a voice instruction of a user and acquiring 3 images containing gesture information with the trinocular stereo vision system, wherein the three images are respectively: a first image, a second image, and a third image; and, in combination with the calculation module, for obtaining the world coordinates of the knuckle and the three-dimensional pointing angle of the finger;
the data storage module is used for storing world coordinates of all targets;
the computing module is used for determining a unique target object pointed by the gesture according to the world coordinates of the knuckle, the three-dimensional pointing angle and the world coordinates of all target objects;
the voice recognition module is used for executing voice instructions on the unique target object.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202310497713.2A CN116225236B (en) | 2023-05-06 | 2023-05-06 | Intelligent home scene interaction method based on acousto-optic control |
Publications (2)
Publication Number | Publication Date |
---|---|
CN116225236A true CN116225236A (en) | 2023-06-06 |
CN116225236B CN116225236B (en) | 2023-08-04 |
Family
ID=86573487
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN202310497713.2A Active CN116225236B (en) | 2023-05-06 | 2023-05-06 | Intelligent home scene interaction method based on acousto-optic control |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN116225236B (en) |
Citations (6)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN104246682A (en) * | 2012-03-26 | 2014-12-24 | 苹果公司 | Enhanced virtual touchpad and touchscreen |
CN104978012A (en) * | 2014-04-03 | 2015-10-14 | 华为技术有限公司 | Pointing interactive method, device and system |
CN107241643A (en) * | 2017-08-03 | 2017-10-10 | 沈阳建筑大学 | A kind of multimedia volume adjusting method and system |
CN108229332A (en) * | 2017-12-08 | 2018-06-29 | 华为技术有限公司 | Bone attitude determination method, device and computer readable storage medium |
CN114779922A (en) * | 2022-03-11 | 2022-07-22 | 南京谦萃智能科技服务有限公司 | Control method for teaching apparatus, control apparatus, teaching system, and storage medium |
DE102021105068A1 (en) * | 2021-03-03 | 2022-09-08 | Gestigon Gmbh | METHOD AND SYSTEM FOR HAND GESTURE BASED DEVICE CONTROL |
Also Published As
Publication number | Publication date |
---|---|
CN116225236B (en) | 2023-08-04 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
US20210233275A1 (en) | Monocular vision tracking method, apparatus and non-transitory computer-readable storage medium | |
CN110568447B (en) | Visual positioning method, device and computer readable medium | |
CN107705333B (en) | Space positioning method and device based on binocular camera | |
CN111862201B (en) | Deep learning-based spatial non-cooperative target relative pose estimation method | |
CN112771573A (en) | Depth estimation method and device based on speckle images and face recognition system | |
KR20180050702A (en) | Image transformation processing method and apparatus, computer storage medium | |
CN108537214B (en) | Automatic construction method of indoor semantic map | |
JP5833507B2 (en) | Image processing device | |
CN108573471B (en) | Image processing apparatus, image processing method, and recording medium | |
CN113256718B (en) | Positioning method and device, equipment and storage medium | |
CN111325798B (en) | Camera model correction method, device, AR implementation equipment and readable storage medium | |
CN113361365B (en) | Positioning method, positioning device, positioning equipment and storage medium | |
CN112102404B (en) | Object detection tracking method and device and head-mounted display equipment | |
US20210334569A1 (en) | Image depth determining method and living body identification method, circuit, device, and medium | |
CN112509036B (en) | Pose estimation network training and positioning method, device, equipment and storage medium | |
JP2017011328A (en) | Apparatus, method and program for image processing | |
CN113362314B (en) | Medical image recognition method, recognition model training method and device | |
CN112083403A (en) | Positioning tracking error correction method and system for virtual scene | |
CN113793370A (en) | Three-dimensional point cloud registration method and device, electronic equipment and readable medium | |
Gao et al. | Marker tracking for video-based augmented reality | |
CN112197708B (en) | Measuring method and device, electronic device and storage medium | |
CN113172636A (en) | Automatic hand-eye calibration method and device and storage medium | |
CN116225236B (en) | Intelligent home scene interaction method based on acousto-optic control | |
JP2006113832A (en) | Stereoscopic image processor and program | |
CN113628284B (en) | Pose calibration data set generation method, device and system, electronic equipment and medium |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||
GR01 | Patent grant | ||