CN111766947A - Display method, display device, wearable device and medium - Google Patents
- Publication number
- CN111766947A (application number CN202010616492.2A)
- Authority
- CN
- China
- Prior art keywords
- gesture
- image
- determining
- scene
- target area
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Pending
Classifications
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F3/00—Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
- G06F3/01—Input arrangements or combined input and output arrangements for interaction between user and computer
- G06F3/017—Gesture based interaction, e.g. based on a set of recognized hand gestures
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V40/00—Recognition of biometric, human-related or animal-related patterns in image or video data
- G06V40/20—Movements or behaviour, e.g. gesture recognition
Landscapes
- Engineering & Computer Science (AREA)
- Theoretical Computer Science (AREA)
- Human Computer Interaction (AREA)
- General Engineering & Computer Science (AREA)
- General Physics & Mathematics (AREA)
- Physics & Mathematics (AREA)
- General Health & Medical Sciences (AREA)
- Social Psychology (AREA)
- Psychiatry (AREA)
- Multimedia (AREA)
- Health & Medical Sciences (AREA)
- Computer Vision & Pattern Recognition (AREA)
- User Interface Of Digital Computer (AREA)
Abstract
The application provides a display method, comprising: acquiring a gesture image sent by a gesture camera; acquiring a scene image sent by a photographing camera, wherein the lines of sight of the photographing camera and the gesture camera are parallel and their fields of view are identical; and determining the gesture type in the gesture image and executing a preset display operation according to the gesture type and the scene image. By providing a gesture camera dedicated to collecting gesture images, the corresponding preset display operation can be executed on the scene image based on the gesture type in the gesture image, satisfying a variety of user requirements. The application also provides a display device, a wearable device and a computer-readable storage medium, which share the above beneficial effects.
Description
Technical Field
The present application relates to the field of display technologies, and in particular, to a display method, a display apparatus, a wearable device, and a computer-readable storage medium.
Background
AR/VR smart wearable devices ingeniously fuse virtual information with the real world. They draw broadly on technologies such as multimedia, three-dimensional modeling, real-time tracking and registration, intelligent interaction and sensing: computer-generated virtual information such as text, images, three-dimensional models, music and video is simulated and then applied to the real world, where the two kinds of information complement each other to augment reality. In the related art, only the captured picture can be shown during virtual display, which cannot satisfy the variety of users' image-display requirements.
Therefore, how to provide a solution to the above technical problem is a problem that needs to be solved by those skilled in the art.
Disclosure of Invention
The application aims to provide a display method, a display device, a wearable device and a computer-readable storage medium that can execute a preset display operation according to the gesture type and the scene image, satisfying a variety of user requirements. The specific scheme is as follows:
the application provides a display method, comprising:
acquiring a gesture image sent by a gesture camera;
acquiring a scene image sent by a photographing camera, wherein the lines of sight of the photographing camera and the gesture camera are parallel and their fields of view are identical;
and determining the gesture type in the gesture image, and executing preset display operation according to the gesture type and the scene image.
Preferably, the determining a gesture type in the gesture image, and executing a preset display operation according to the gesture type and the scene image includes:
when the gesture type in the gesture image is a first preset gesture, determining a left-hand feature point and a right-hand feature point according to the gesture image;
determining corresponding scene points in the scene image according to the left-hand and right-hand feature points;
determining a first target area in the scene image based on the scene point and the aspect ratio, and amplifying the first target area.
Preferably, the determining a gesture type in the gesture image, and executing a preset display operation according to the gesture type and the scene image includes:
when the gesture type in the gesture image is a second preset gesture for the first time, determining a left-hand feature point and a right-hand feature point based on the gesture image;
determining corresponding scene points in the scene image according to the left-hand and right-hand feature points;
determining an initial second target region in the scene image based on the scene points and the aspect ratio;
calculating the feature points in the initial second target region using a target tracking algorithm and enlarging the initial second target region.
Preferably, after the enlarging the initial second target region, the method further includes:
acquiring a new scene image;
determining a second target area of the new scene image based on the position information of the feature points in the initial second target area;
magnifying the second target area.
Preferably, after the amplifying the second target region, the method further includes:
if the gesture type in the current gesture image is the second preset gesture and the gesture type in the previous gesture image is not the second preset gesture, determining a new feature point based on the current gesture image;
determining a third target area based on the new feature points;
magnifying the third target area.
Preferably, after the amplifying the second target region, the method further includes:
when the gesture type in the next gesture image is detected to be a first preset gesture, determining a first target area in the current scene image based on the next gesture image and the aspect ratio, and amplifying the first target area.
Preferably, the determining a gesture type in the gesture image, and executing a preset display operation according to the gesture type and the scene image includes:
and when the gesture type in the gesture image is a third preset gesture, clearing all the enlarged images.
The application provides a display device, including:
the gesture image acquisition module is used for acquiring a gesture image sent by the gesture camera;
the scene image acquisition module is used for acquiring a scene image sent by the photographing camera, wherein the lines of sight of the photographing camera and the gesture camera are parallel and their fields of view are identical;
and the gesture type determining and operation executing module is used for determining the gesture type in the gesture image and executing preset display operation according to the gesture type and the scene image.
The application provides a wearable device, including:
the gesture camera is used for acquiring a gesture image;
the shooting camera is used for collecting scene images;
a memory for storing a computer program;
a processor for implementing the steps of the display method as described above when executing the computer program.
A computer-readable storage medium is provided, on which a computer program is stored which, when being executed by a processor, carries out the steps of the display method as described above.
The application provides a display method, comprising: acquiring a gesture image sent by a gesture camera; acquiring a scene image sent by a photographing camera, wherein the lines of sight of the photographing camera and the gesture camera are parallel and their fields of view are identical; and determining the gesture type in the gesture image and executing a preset display operation according to the gesture type and the scene image.
Therefore, the gesture camera used for collecting the gesture images is arranged, corresponding preset display operation can be executed on the scene images based on the gesture types in the gesture images, and various requirements of people are met.
The application also provides a display device, a wearable device and a computer-readable storage medium, all of which have the above beneficial effects and are not described in detail again here.
Drawings
To illustrate the embodiments of the present application or the technical solutions in the prior art more clearly, the drawings needed in the description of the embodiments or the prior art are briefly introduced below. Obviously, the following drawings show only embodiments of the present application; those skilled in the art can obtain other drawings from them without creative effort.
Fig. 1 is a flowchart of a display method according to an embodiment of the present disclosure;
fig. 2 is a schematic diagram of a gesture camera 200 and a photographing camera 100 provided in an embodiment of the present application;
FIG. 3 is a schematic diagram illustrating a first preset gesture provided in an embodiment of the present application;
fig. 4 is a schematic diagram of gesture image and scene image acquisition according to an embodiment of the present disclosure;
FIG. 5 is a schematic diagram of a gesture image according to an embodiment of the present disclosure;
fig. 6 is a schematic diagram of a scene image according to an embodiment of the present application;
FIG. 7 is a diagram illustrating a second preset gesture according to an embodiment of the present disclosure;
FIG. 8 is a schematic diagram of an image display provided by an embodiment of the present application;
FIG. 9 is a schematic diagram of another image display provided by an embodiment of the present application;
FIG. 10 is a schematic diagram illustrating a third preset gesture provided in the present application;
fig. 11 is a schematic structural diagram of a display device according to an embodiment of the present disclosure;
fig. 12 is a schematic structural diagram of a wearable device provided in an embodiment of the present application.
Detailed Description
To make the objects, technical solutions and advantages of the embodiments of the present application clearer, the technical solutions in the embodiments are described clearly and completely below with reference to the accompanying drawings. Obviously, the described embodiments are only some, not all, of the embodiments of the present application. All other embodiments obtained by a person skilled in the art from these embodiments without creative effort shall fall within the protection scope of the present application.
In the related art, only shot pictures can be virtually displayed during virtual display, and various requirements of people for displaying images cannot be met. Based on the above technical problem, the present embodiment provides a display method, which can execute preset display operations according to gesture types and scene images to meet various requirements of people, specifically referring to fig. 1, where fig. 1 is a flowchart of the display method provided in the embodiment of the present application, and specifically includes:
s101, acquiring a gesture image sent by a gesture camera;
s102, obtaining a scene image sent by a photographing camera, wherein the sight lines of the photographing camera and the gesture camera are parallel and the field angle is consistent;
the photographing camera is arranged at a position far away from the eyes of a user and not shielded by the gestures of the user, such as the top of an AR helmet; the gesture camera is used for collecting gesture images so as to identify gestures of a user and is placed at a position close to eyes of the user. The sight lines of the photographing camera and the gesture camera are kept parallel, and the field angles of the photographing camera and the gesture camera are kept consistent. The position set by the gesture camera is not limited in the embodiment, and the user can set the gesture camera in a user-defined mode as long as the purpose of the embodiment can be achieved. In one implementation, the gesture image is captured when the gesture camera is facing the scene; in another implementation, the gesture image is captured when the gesture camera is facing the face. Referring to fig. 2, fig. 2 is a schematic diagram of a gesture camera 200 and a photographing camera 100 according to an embodiment of the present disclosure.
Further, to ensure that the user can start the display method provided by this embodiment on demand and to reduce energy consumption, the method further includes, before acquiring the gesture image sent by the gesture camera: acquiring a display instruction. This embodiment does not limit how the display instruction is acquired; the user may configure it as desired, as long as the purpose of this embodiment can be achieved. The display instruction may be obtained after a physical key is detected to be triggered; after a target keyword is detected in speech; or when a preset position is touched. By starting execution of the display method only after the display instruction is obtained, this embodiment saves energy.
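The three trigger modes above can be sketched as a single predicate. This is only an illustrative sketch: the parameter names and the default keyword are assumptions, since the patent leaves the concrete trigger mechanism unspecified.

```python
def display_instruction_received(key_triggered=False, voice_text="",
                                 touched_preset=False, keyword="start display"):
    """Return True when any of the three trigger modes fires:
    a physical key press, a target keyword in recognized speech,
    or a touch on a preset position."""
    return key_triggered or (keyword in voice_text) or touched_preset
```

Any one mode is sufficient, so the display method starts as soon as the first trigger is detected.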
S103, determining the gesture type in the gesture image, and executing preset display operation according to the gesture type and the scene image.
The purpose of this step is to perform a display operation based on the gesture type. Specifically, it is first judged whether a gesture exists in the gesture image. If a gesture exists, its gesture type is determined and matched against the built-in display operations; on a successful match, the corresponding preset display operation is obtained and executed, and on an unsuccessful match, the current scene image continues to be displayed. If no gesture exists, the current scene image continues to be displayed.
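The matching logic of step S103 can be sketched as a small dispatcher. The gesture labels and the handler table are hypothetical, since the patent does not name a concrete API; the fallthrough behavior (keep showing the current scene image) follows the text above.

```python
def execute_display_operation(gesture_type, scene_image, handlers):
    """Dispatch a preset display operation by gesture type.

    gesture_type: label from the gesture classifier, or None if no gesture.
    handlers: dict mapping gesture labels to display operations.
    """
    # No gesture, or no matching preset operation: keep the scene image.
    if gesture_type is None or gesture_type not in handlers:
        return scene_image
    # Matched: execute the corresponding preset display operation.
    return handlers[gesture_type](scene_image)
```

For example, a "frame" gesture might map to the enlargement operation of the first preset gesture, while an unmatched gesture leaves the display unchanged.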
Based on the above technical solution, by providing a gesture camera for collecting gesture images, this embodiment can execute the corresponding preset display operation on the scene image based on the gesture type in the gesture image, satisfying a variety of user requirements.
In an implementable embodiment, to intelligently control the enlargement of a designated area, step S103 includes: when the gesture type in the gesture image is a first preset gesture, determining left-hand and right-hand feature points according to the gesture image; determining corresponding scene points in the scene image according to the left-hand and right-hand feature points; and determining a first target area in the scene image based on the scene points and the aspect ratio, and enlarging the first target area.
Specifically, this embodiment does not limit the first preset gesture, as long as the purpose of this embodiment can be achieved; please refer to fig. 3, a schematic diagram of a first preset gesture provided in this embodiment of the present application. In this embodiment, the first preset gesture in the gesture image not only serves as the instruction to start the telescope function, but also determines the left-hand and right-hand feature points: the feature points are determined from the gesture in the gesture image, and the corresponding scene points in the scene image are determined from them, so as to locate the first target area that needs to be enlarged. It can be understood that the farther the observed object is from the cameras, the closer its relative positions in the gesture image and the scene image are; please refer to fig. 4, a schematic diagram of gesture image and scene image acquisition provided in the embodiment of the present application. In this embodiment, when the gesture type in the gesture image is the first preset gesture, the telescopic state is entered. Let the gesture image have width w and height h, and the scene image width w2 and height h2, with w/h = w2/h2. Left-hand and right-hand feature points are determined from the gesture image, comprising the center coordinates (lx, ly) of the left-hand area and the center coordinates (rx, ry) of the right-hand area. Because the gesture camera and the photographing camera have the same orientation and field of view, a distant object appears at the same position in the pictures taken by both. Referring to fig. 5, a schematic diagram of a gesture image according to an embodiment of the present application, the positions (lx, ly) and (rx, ry) in P1(n) are scaled in equal proportion, and the corresponding scene points (lx2, ly2) and (rx2, ry2) in P2(n) are calculated using a first formula group, which includes:
lx2=lx/w*w2;
ly2=ly/h*h2;
rx2=rx/w*w2;
ry2=ry/h*h2。
A first target area in the scene image is then determined from the scene points and the aspect ratio. Specifically, the first target area R(n) to be enlarged is calculated as follows: the upper-left corner of R(n) is (lx2, ly2 - (rx2-lx2)/2/r) and the lower-right corner is (rx2, ry2 + (rx2-lx2)/2/r), where r is the aspect ratio, r = w/h = w2/h2. Referring to fig. 6, a schematic diagram of a scene image provided in this embodiment of the present application: a first value d = (rx2-lx2)/2 is determined from the scene points (lx2, ly2) and (rx2, ry2); a second value a = d/r is determined from d and the aspect ratio r; the upper-left and lower-right corner coordinates of the target area are determined from a; and the first target area is determined from these two corner coordinates. The R(n) region of P2(n) is enlarged and then projected onto the center of the user's field of view by the display module, so the user sees an enlarged image of the area enclosed by the gesture at the center of the field of view. As can be seen from the above conversion, the closer together the user's hands are, the smaller the region to be enlarged, and the higher the magnification once it is displayed at a fixed size. The user can therefore adjust the magnification by moving the hands closer together or farther apart.
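Under the stated assumption that the two cameras share orientation and field of view, the first formula group and the R(n) computation can be sketched as follows. The function names are illustrative, not the patent's implementation.

```python
def gesture_to_scene_points(lx, ly, rx, ry, w, h, w2, h2):
    """First formula group: map the hand-center coordinates from the
    gesture image (w x h) into the scene image (w2 x h2)."""
    left = (lx / w * w2, ly / h * h2)
    right = (rx / w * w2, ry / h * h2)
    return left, right

def first_target_region(left_pt, right_pt, r):
    """Compute R(n) from the scene points and the aspect ratio
    r = w/h = w2/h2: d = (rx2 - lx2) / 2, vertical half-extent a = d / r."""
    (lx2, ly2), (rx2, ry2) = left_pt, right_pt
    d = (rx2 - lx2) / 2.0
    a = d / r
    upper_left = (lx2, ly2 - a)   # (lx2, ly2 - (rx2-lx2)/2/r)
    lower_right = (rx2, ry2 + a)  # (rx2, ry2 + (rx2-lx2)/2/r)
    return upper_left, lower_right
```

Note that the resulting region is (rx2-lx2) wide and (rx2-lx2)/r tall, so it keeps the display aspect ratio r; the closer the hands, the smaller the region and hence the greater the magnification when shown at a fixed size.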
Based on the above technical means, when the gesture type is the first preset gesture, this embodiment can enlarge the first target region in the scene image based on the left-hand and right-hand feature points in the gesture image, with the magnification determined by the distance between them.
In another implementable embodiment, to enable tracking of a target object, step S103 includes: when the gesture type in the gesture image is a second preset gesture for the first time, determining left-hand and right-hand feature points based on the gesture image; determining corresponding scene points in the scene image according to the left-hand and right-hand feature points; determining an initial second target area in the scene image based on the scene points and the aspect ratio; calculating feature points in the initial second target area using a target tracking algorithm and enlarging the initial second target area; then acquiring a new scene image, determining a second target area in the new scene image based on the feature points, and enlarging the second target area.
This embodiment does not limit the second preset gesture; the user may define it as desired. Please refer to fig. 7, a schematic diagram of a second preset gesture provided in this embodiment of the present application. If the second preset gesture is detected in gesture image P1(n) and no second preset gesture appears in a preset number of consecutive gesture images before P1(n), the second preset gesture is considered detected for the first time. At this point the initial second target area S(n) in P2(n) is calculated. The calculation is the same as for the area R(n), only with the first preset gesture replaced by the second: left-hand and right-hand feature points are determined from the gesture in the gesture image, and the corresponding scene points in the scene image are determined from them, so as to locate the initial second target area that needs to be enlarged. The left-hand and right-hand feature points comprise the center coordinates (lx, ly) of the left-hand area and the center coordinates (rx, ry) of the right-hand area, and the corresponding scene points (lx2, ly2) and (rx2, ry2) in P2(n) are calculated using the first formula group, which includes:
lx2=lx/w*w2;
ly2=ly/h*h2;
rx2=rx/w*w2;
ry2=ry/h*h2。
The initial second target area in the scene image is determined from the scene points and the aspect ratio. Specifically, a first value d = (rx2-lx2)/2 is determined from the scene points (lx2, ly2) and (rx2, ry2); a second value a = d/r is determined from d and the aspect ratio r; the upper-left corner (lx2, ly2 - (rx2-lx2)/2/r) and lower-right corner (rx2, ry2 + (rx2-lx2)/2/r) of the tracking area are determined from a; and the initial second target area is determined from these two corner coordinates.
Meanwhile, the target tracking algorithm is enabled: feature points are searched for in the initial second target region S(n) in P2(n) and the region is enlarged; a new scene image (the next scene image) is then acquired, and the second target region is determined in the new scene image based on the feature points and enlarged. This embodiment does not limit how the second target area is determined; the user may configure it as desired, as long as the purpose of this embodiment can be achieved. In one achievable implementation, a second target area centered on the feature point is determined, and subsequent second target areas are enlarged and displayed centered on the feature point; in another, the second target area is determined based on the position information of the feature point within the initial second target area, and is always enlarged and displayed according to that position information.
Further, in order to track the feature objects corresponding to the feature points, specifically, the determining a second target region in the new scene image based on the feature points includes: and determining a second target area of the new scene image based on the position information of the feature points in the initial second target area.
A feature point A is determined, with position A(nx, ny). In a subsequent scene image captured by the camera, for example P2(m), the new position A(mx, my) of feature point A is found, and the second target region T(m) corresponding to A(mx, my) is derived from the position of A(nx, ny) within the initial second target region S(n); that is, within the T(m) region the position of A is fixed, regardless of whether the gesture position of any subsequent second preset gesture changes. Let S(n) have upper-left coordinates (sx1, sy1) and lower-right coordinates (sx2, sy2); then T(m) has upper-left coordinates (mx-nx+sx1, my-ny+sy1) and lower-right coordinates (mx-nx+sx2, my-ny+sy2). On the display module, the T(m) region is enlarged and projected in real time to a fixed position at the side of the user's field of view for continuous display; please refer to fig. 8, a schematic diagram of image display provided in the embodiment of the present application. Even if the second preset gesture disappears, the region remains fixed at the side of the user's field of view and is updated in real time. A moving object containing feature point A is thus captured, and the image T(m) containing it is enlarged and displayed in a fixed region of the user's field of view. It can be seen that, by determining the second target region based on the position information of the feature point within the initial second target region, this embodiment can keep the region corresponding to the feature point displayed in the user's field of view.
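The translation of S(n) into T(m) by the feature point's displacement can be sketched as follows; this is a minimal illustration under the corner formulas above, not the patent's implementation.

```python
def track_region(initial_region, a_initial, a_current):
    """Translate the initial tracking region S(n) by the feature point's
    motion: T(m) corners = S(n) corners + (mx - nx, my - ny).

    initial_region: ((sx1, sy1), (sx2, sy2)) upper-left/lower-right of S(n).
    a_initial: (nx, ny) position of feature point A in S(n).
    a_current: (mx, my) position of A in the new scene image P2(m).
    """
    (sx1, sy1), (sx2, sy2) = initial_region
    nx, ny = a_initial
    mx, my = a_current
    dx, dy = mx - nx, my - ny
    return (sx1 + dx, sy1 + dy), (sx2 + dx, sy2 + dy)
```

Because only the displacement of A is applied, A keeps the same relative position inside T(m) that it had inside S(n), which is exactly the fixed-position property described above.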
Further, after the enlarging the second target area, the method further includes: and when the gesture type in the next gesture image is detected to be a first preset gesture, determining a first target area in the current scene image based on the next gesture image and the aspect ratio, and amplifying the first target area.
That is, if the first preset gesture is detected again at this time, the first target region is simultaneously displayed at the center of the field of view. Specifically, determining the first target area in the current scene image based on the next gesture image and the aspect ratio may include: determining left-hand and right-hand feature points according to the next gesture image; determining corresponding scene points in the current scene image according to the left-hand and right-hand feature points; and determining a first target area in the current scene image based on the scene points and the aspect ratio, and enlarging it. Please refer to the above embodiments for details, which are not repeated here. On the display module, the enlarged first target area is placed below the image position corresponding to the second target area, so that the enlarged images can be distinguished.
Further, after enlarging the second target area, in order to simultaneously track a plurality of moving targets, the method further includes: if the gesture type in the current gesture image is a second preset gesture and the gesture type in the previous gesture image is not the second preset gesture, determining a new feature point based on the current gesture image; and determining a third target area based on the new feature points, and amplifying the third target area.
In one implementation, the third target area is obtained in the same way as the first target area, and feature point A is selected only when the second preset gesture is detected for the first time. That is, at the moment the user makes the second preset gesture, the object between the two hands is captured and tracked in real time. When the second preset gesture reappears after disappearing, a new tracking target can be captured and displayed in sequence in the right-hand area of the user's field of view, so that multiple moving targets can be tracked simultaneously; referring to fig. 9, fig. 9 is a schematic view of another image display provided in the embodiment of the present application. In another implementable embodiment, determining the third target region based on the new feature point specifically comprises: determining a third target region based on the new feature point and the existing feature point, such that the third target region contains both, and then enlarging the third target region.
In another implementable embodiment, to enable clearing of the enlarged image, step S103 includes: and when the gesture type in the gesture image is a third preset gesture, clearing all the enlarged images.
Referring to fig. 10, fig. 10 is a schematic view illustrating a third preset gesture according to an embodiment of the present disclosure. When a palm-extending gesture is detected in P1(n), all tracked object displays to the right of the user's field of view are cleared.
Referring to fig. 11, fig. 11 is a schematic structural diagram of a display device according to an embodiment of the present disclosure, where the display device described below and the display method described above are referred to in correspondence, and the display device includes:
a gesture image obtaining module 201, configured to obtain a gesture image sent by a gesture camera;
the scene image acquisition module 202 is configured to acquire a scene image sent by the photographing camera, where the sight lines of the photographing camera and the gesture camera are parallel and their field angles are consistent;
the gesture type determining and operation executing module 203 is configured to determine the gesture type in the gesture image and execute a preset display operation according to the gesture type and the scene image.
Preferably, the gesture type determination and operation execution module 203 comprises:
the first left-hand and right-hand feature point determining unit is used for determining a left-hand feature point and a right-hand feature point according to the gesture image when the gesture type in the gesture image is a first preset gesture;
the first scene point determining unit is used for determining corresponding scene points in the scene image according to the left-hand and right-hand feature points;
and the target area determining and enlarging unit is used for determining a first target area in the scene image based on the scene points and the aspect ratio, and enlarging the first target area.
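One way the unit above could derive a rectangle from the two scene points and the aspect ratio is sketched below. The fitting rule (expand the bounding box of the two points to the display aspect ratio around its center) is an assumption for illustration; the patent only states that the area is determined from the scene points and the aspect ratio:

```python
# Illustrative sketch: build the first target area from the two scene points
# (projected from the left- and right-hand feature points), expanding their
# bounding box to the display aspect ratio before enlargement.
# The centering/expansion rule is an assumption, not the patent's method.

def target_area(left_pt, right_pt, aspect_ratio):
    """Return (x, y, w, h) covering both points at the given w/h ratio."""
    x0, x1 = sorted((left_pt[0], right_pt[0]))
    y0, y1 = sorted((left_pt[1], right_pt[1]))
    w, h = x1 - x0, y1 - y0
    cx, cy = (x0 + x1) / 2, (y0 + y1) / 2
    # Grow the narrower dimension so that w / h == aspect_ratio.
    if w < h * aspect_ratio:
        w = h * aspect_ratio
    else:
        h = w / aspect_ratio
    return (cx - w / 2, cy - h / 2, w, h)

# Two hand-projected scene points fitted to a 16:9 display area.
area = target_area((100, 100), (300, 200), aspect_ratio=16 / 9)
```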
Preferably, the gesture type determination and operation execution module 203 comprises:
the second left-hand and right-hand feature point determining unit is used for determining a left-hand feature point and a right-hand feature point based on the gesture image when the gesture type in the gesture image is a second preset gesture for the first time;
the second scene point determining unit is used for determining corresponding scene points in the scene image according to the left-hand and right-hand feature points;
an initial second target area determining unit, configured to determine an initial second target area in the scene image based on the scene points and the aspect ratio;
a feature point determining unit, configured to calculate feature points in the initial second target area using a target tracking algorithm and to enlarge the initial second target area;
a new scene image acquiring unit, configured to acquire a new scene image;
and the second target area determining unit is used for determining a second target area in the new scene image based on the feature points and enlarging the second target area.
Preferably, the second target region determining unit includes:
and the second target area determining subunit is used for determining a second target area of the new scene image based on the position information of the feature point in the initial second target area.
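Re-deriving the second target area from the tracked feature points' updated positions might look like the sketch below. Keeping the box size fixed and re-centering it on the points' centroid is an assumed rule for illustration; the patent only requires that the new area be determined from the feature points' position information:

```python
# Illustrative sketch: recompute the second target area in each new scene
# image from the tracked feature points' updated positions, keeping the box
# size fixed and re-centering it on the points' centroid.
# This centroid rule is an assumption, not the patent's stated method.

def update_target_area(feature_points, box_w, box_h):
    """Return (x, y, w, h) centered on the feature points' centroid."""
    xs = [p[0] for p in feature_points]
    ys = [p[1] for p in feature_points]
    cx = sum(xs) / len(xs)
    cy = sum(ys) / len(ys)
    return (cx - box_w / 2, cy - box_h / 2, box_w, box_h)

# Feature points have drifted in the new frame; the box follows them.
area = update_target_area([(100, 60), (140, 100)], box_w=200, box_h=120)
```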
Preferably, the display device further comprises:
the new feature point determining module is used for determining a new feature point based on the current gesture image if the gesture type in the current gesture image is a second preset gesture and the gesture type in the previous gesture image is not the second preset gesture;
a third target region determination module for determining a third target region based on the new feature points,
and the amplifying module is used for amplifying the third target area.
Preferably, the display device further comprises:
and the second target area amplifying module is used for determining a first target area in the current scene image based on the next gesture image and the aspect ratio and amplifying the first target area when the gesture type in the next gesture image is detected to be the first preset gesture.
Preferably, the gesture type determination and operation execution module 203 comprises:
and the clearing unit is used for clearing all the enlarged images when the gesture type in the gesture image is a third preset gesture.
Since the embodiments of the apparatus portion and the method portion correspond to each other, please refer to the description of the embodiments of the method portion for the embodiments of the apparatus portion, which is not repeated here.
In the following, a wearable device provided in the embodiments of the present application is introduced, and the wearable device described below and the display method described above may be referred to correspondingly. Referring to fig. 12, fig. 12 is a schematic structural diagram of a wearable device according to an embodiment of the present application, including:
the gesture camera 200 is used for acquiring gesture images;
a camera 100 for capturing a scene image;
a memory 300 for storing a computer program;
a processor 400 for implementing the steps of the display method as described above when executing the computer program.
The sight lines of the photographing camera 100 and the gesture camera 200 are parallel, and their field angles are consistent.
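This parallel-sightline, equal-field-angle constraint is what lets hand positions found in the gesture image be transferred directly into the scene image: ignoring the small baseline between the two cameras, a gesture-image pixel corresponds to the scene-image pixel at the same normalized coordinates, so only a resolution scaling is needed. A minimal sketch under that assumption (the resolutions below are assumed example values):

```python
# Illustrative sketch: under parallel sight lines and equal field angles
# (and neglecting the small baseline between the two cameras), a pixel in
# the gesture image maps to the scene-image pixel at the same normalized
# position, so the mapping reduces to a resolution scale.

def gesture_to_scene(px, py, gesture_res, scene_res):
    """Map a gesture-camera pixel to the scene-camera pixel at the same
    normalized position."""
    gw, gh = gesture_res
    sw, sh = scene_res
    return (px * sw / gw, py * sh / gh)

# A point at the center of a 640x480 gesture image lands at the center
# of a 1920x1080 scene image (example resolutions).
pt = gesture_to_scene(320, 240, (640, 480), (1920, 1080))
```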
The wearable device may further include an input/output interface 500 and a network port 600.

The memory 300 includes a non-volatile storage medium and an internal memory. The non-volatile storage medium stores an operating system and computer-readable instructions, and the internal memory provides an environment for running the operating system and the computer-readable instructions from the non-volatile storage medium.

The processor 400 may be a Central Processing Unit (CPU), another general-purpose processor, a Digital Signal Processor (DSP), an Application-Specific Integrated Circuit (ASIC), a Field-Programmable Gate Array (FPGA) or other programmable logic device, a discrete gate or transistor logic device, a discrete hardware component, etc. A general-purpose processor may be a microprocessor, or the processor may be any conventional processor. The processor 400 provides the wearable device with computing and control capabilities, and implements the steps of the display method when executing the computer program stored in the memory 300.

The input/output interface 500 is used for acquiring computer programs, parameters and instructions imported from the outside, which are stored in the memory 300 under the control of the processor 400. The input/output interface 500 may be connected to an input device for receiving parameters or instructions manually entered by a user. The input device may be a touch layer covering a display screen, a button, a track ball or a touch pad arranged on a terminal housing, or a keyboard, a touch pad or a mouse, etc. Specifically, in this embodiment, the user may start the display method through the input/output interface 500.

The network port 600 is used for establishing communication connections with external terminal devices.
The communication technology adopted by the communication connection may be wired or wireless, such as Mobile High-Definition Link (MHL), Universal Serial Bus (USB), High-Definition Multimedia Interface (HDMI), Wireless Fidelity (WiFi), Bluetooth, Bluetooth Low Energy, or IEEE 802.11s-based communication. Specifically, in this embodiment, under normal networking conditions, authentication may be implemented through interaction between the network port 600 and a mobile phone or a tablet computer.
Since the embodiment of the wearable device portion and the embodiment of the display method portion correspond to each other, please refer to the description of the embodiment of the display method portion for the embodiment of the wearable device portion, which is not repeated here.
The following describes a computer-readable storage medium provided by embodiments of the present application, and the computer-readable storage medium described below and the method described above may be referred to correspondingly.
The present embodiment provides a computer-readable storage medium, on which a computer program is stored, which, when being executed by a processor, implements the steps of the display method as described above.
Since the embodiment of the computer-readable storage medium portion and the embodiment of the method portion correspond to each other, please refer to the description of the embodiment of the method portion for the embodiment of the computer-readable storage medium portion, which is not repeated here.
The embodiments are described in a progressive manner in the specification, each embodiment focuses on differences from other embodiments, and the same and similar parts among the embodiments are referred to each other. The device disclosed by the embodiment corresponds to the method disclosed by the embodiment, so that the description is simple, and the relevant points can be referred to the method part for description.
Those of skill would further appreciate that the various illustrative elements and algorithm steps described in connection with the embodiments disclosed herein may be implemented as electronic hardware, computer software, or combinations of both, and that the various illustrative components and steps have been described above generally in terms of their functionality in order to clearly illustrate this interchangeability of hardware and software. Whether such functionality is implemented as hardware or software depends upon the particular application and design constraints imposed on the implementation. Skilled artisans may implement the described functionality in varying ways for each particular application, but such implementation decisions should not be interpreted as causing a departure from the scope of the present application.
The steps of a method or algorithm described in connection with the embodiments disclosed herein may be embodied directly in hardware, in a software module executed by a processor, or in a combination of the two. A software module may reside in Random Access Memory (RAM), flash memory, Read-Only Memory (ROM), electrically programmable ROM, electrically erasable programmable ROM, registers, a hard disk, a removable disk, a CD-ROM, or any other form of storage medium known in the art.
A display method, a display apparatus, a wearable device, and a computer-readable storage medium provided by the present application are described in detail above. The principles and embodiments of the present application are explained herein using specific examples, which are provided only to help understand the method and the core idea of the present application. It should be noted that, for those skilled in the art, it is possible to make several improvements and modifications to the present application without departing from the principle of the present application, and such improvements and modifications also fall within the scope of the claims of the present application.
Claims (10)
1. A display method, comprising:
acquiring a gesture image sent by a gesture camera;
acquiring a scene image sent by a photographing camera, wherein the sight lines of the photographing camera and the gesture camera are parallel and their field angles are consistent;
and determining the gesture type in the gesture image, and executing preset display operation according to the gesture type and the scene image.
2. The display method according to claim 1, wherein the determining a gesture type in the gesture image, and performing a preset display operation according to the gesture type and the scene image comprises:
when the gesture type in the gesture image is a first preset gesture, determining a left-hand feature point and a right-hand feature point according to the gesture image;
determining corresponding scene points in the scene image according to the left-hand and right-hand feature points;
determining a first target area in the scene image based on the scene point and the aspect ratio, and amplifying the first target area.
3. The display method according to claim 1, wherein the determining a gesture type in the gesture image, and performing a preset display operation according to the gesture type and the scene image comprises:
when the gesture type in the gesture image is a second preset gesture for the first time, determining a left-hand feature point and a right-hand feature point based on the gesture image;
determining corresponding scene points in the scene image according to the left-hand and right-hand feature points;
determining an initial second target region in the scene image based on the scene points and the aspect ratio;
calculating the feature points in the initial second target region by using a target tracking algorithm, and amplifying the initial second target region;
acquiring a new scene image;
and determining a second target area in the new scene image based on the characteristic points, and amplifying the second target area.
4. The method according to claim 3, wherein the determining a second target region in the new scene image based on the feature point comprises:
determining the second target area of the new scene image based on the position information of the feature points in the initial second target area.
5. The method of claim 4, wherein after said magnifying the second target region, further comprising:
if the gesture type in the current gesture image is the second preset gesture and the gesture type in the previous gesture image is not the second preset gesture, determining a new feature point based on the current gesture image;
determining a third target area based on the new feature points,
magnifying the third target area.
6. The method according to any one of claims 3 to 5, wherein after the enlarging the second target region, further comprising:
when the gesture type in the next gesture image is detected to be a first preset gesture, determining a first target area in the current scene image based on the next gesture image and the aspect ratio, and amplifying the first target area.
7. The display method according to claim 1, wherein the determining a gesture type in the gesture image, and performing a preset display operation according to the gesture type and the scene image comprises:
and when the gesture type in the gesture image is a third preset gesture, clearing all the enlarged images.
8. A display device, comprising:
the gesture image acquisition module is used for acquiring a gesture image sent by the gesture camera;
the scene image acquisition module is used for acquiring a scene image sent by the photographing camera, wherein the sight lines of the photographing camera and the gesture camera are parallel and their field angles are consistent;
and the gesture type determining and operation executing module is used for determining the gesture type in the gesture image and executing preset display operation according to the gesture type and the scene image.
9. A wearable device, comprising:
the gesture camera is used for acquiring a gesture image;
the shooting camera is used for collecting scene images;
a memory for storing a computer program;
a processor for implementing the steps of the display method according to any one of claims 1 to 7 when executing the computer program.
10. A computer-readable storage medium, characterized in that a computer program is stored on the computer-readable storage medium, which computer program, when being executed by a processor, carries out the steps of the display method according to any one of claims 1 to 7.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202010616492.2A CN111766947A (en) | 2020-06-30 | 2020-06-30 | Display method, display device, wearable device and medium |
Publications (1)
Publication Number | Publication Date |
---|---|
CN111766947A true CN111766947A (en) | 2020-10-13 |
Family
ID=72723033
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN202010616492.2A Pending CN111766947A (en) | 2020-06-30 | 2020-06-30 | Display method, display device, wearable device and medium |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN111766947A (en) |
Cited By (1)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN112261428A (en) * | 2020-10-20 | 2021-01-22 | 北京字节跳动网络技术有限公司 | Picture display method and device, electronic equipment and computer readable medium |
Citations (8)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US20150049113A1 (en) * | 2013-08-19 | 2015-02-19 | Qualcomm Incorporated | Visual search in real world using optical see-through head mounted display with augmented reality and user interaction tracking |
US20160140384A1 (en) * | 2014-11-17 | 2016-05-19 | Wistron Corporation | Gesture recognition method and gesture recognition apparatus using the same |
CN106462247A (en) * | 2014-06-05 | 2017-02-22 | 三星电子株式会社 | Wearable device and method for providing augmented reality information |
CN106845335A (en) * | 2016-11-29 | 2017-06-13 | 歌尔科技有限公司 | Gesture identification method, device and virtual reality device for virtual reality device |
CN108496142A (en) * | 2017-04-07 | 2018-09-04 | 深圳市柔宇科技有限公司 | A kind of gesture identification method and relevant apparatus |
CN109032358A (en) * | 2018-08-27 | 2018-12-18 | 百度在线网络技术(北京)有限公司 | The control method and device of AR interaction dummy model based on gesture identification |
CN110442238A (en) * | 2019-07-31 | 2019-11-12 | 腾讯科技(深圳)有限公司 | A kind of method and device of determining dynamic effect |
CN110568929A (en) * | 2019-09-06 | 2019-12-13 | 诺百爱(杭州)科技有限责任公司 | Virtual scene interaction method and device and electronic equipment |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
TWI683259B (en) | Method and related device of determining camera posture information | |
CN111815755B (en) | Method and device for determining blocked area of virtual object and terminal equipment | |
CN109064390B (en) | Image processing method, image processing device and mobile terminal | |
CN108710525B (en) | Map display method, device, equipment and storage medium in virtual scene | |
CN107958480B (en) | Image rendering method and device and storage medium | |
US8773502B2 (en) | Smart targets facilitating the capture of contiguous images | |
CN110471596B (en) | Split screen switching method and device, storage medium and electronic equipment | |
WO2020253655A1 (en) | Method for controlling multiple virtual characters, device, apparatus, and storage medium | |
CN110427110B (en) | Live broadcast method and device and live broadcast server | |
CN111353930B (en) | Data processing method and device, electronic equipment and storage medium | |
CN107213636B (en) | Lens moving method, device, storage medium and processor | |
JP2013521544A (en) | Augmented reality pointing device | |
CN112039937B (en) | Display method, position determination method and device | |
CN112068698A (en) | Interaction method and device, electronic equipment and computer storage medium | |
JP2021531589A (en) | Motion recognition method, device and electronic device for target | |
US11902662B2 (en) | Image stabilization method and apparatus, terminal and storage medium | |
CN110782532B (en) | Image generation method, image generation device, electronic device, and storage medium | |
CN111437604A (en) | Game display control method and device, electronic equipment and storage medium | |
CN112954212B (en) | Video generation method, device and equipment | |
CN112702533B (en) | Sight line correction method and sight line correction device | |
WO2024012268A1 (en) | Virtual operation method and apparatus, electronic device, and readable storage medium | |
CN111766947A (en) | Display method, display device, wearable device and medium | |
US20180059811A1 (en) | Display control device, display control method, and recording medium | |
CN115278084A (en) | Image processing method, image processing device, electronic equipment and storage medium | |
CN113873159A (en) | Image processing method and device and electronic equipment |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||