CN114063778A - Method and device for simulating image by utilizing AR glasses, AR glasses and medium

Info

Publication number
CN114063778A
Authority
CN
China
Prior art keywords: information, carrier, glasses, environment image, image information
Legal status: Pending
Application number: CN202111362864.4A
Other languages: Chinese (zh)
Inventors: 刘威 (Liu Wei), 夏勇峰 (Xia Yongfeng)
Current Assignee: Beijing Beehive Century Technology Co., Ltd.
Original Assignee: Beijing Beehive Century Technology Co., Ltd.
Application filed by Beijing Beehive Century Technology Co., Ltd.
Priority: CN202111362864.4A
Publication: CN114063778A

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F 3/00 Input arrangements for transferring data to be processed into a form capable of being handled by the computer; output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F 3/01 Input arrangements or combined input and output arrangements for interaction between user and computer
    • G06F 3/017 Gesture based interaction, e.g. based on a set of recognized hand gestures
    • G06F 3/14 Digital output to display device; cooperation and interconnection of the display device with other functional units

Abstract

The method comprises: pairing with a terminal device corresponding to a user; acquiring environment image information; performing carrier identification based on the environment image information; if a carrier is identified, controlling the display picture information of the terminal device to be displayed on the carrier; performing operation feature identification based on the environment image information; if an operation feature is identified, performing gesture information recognition based on the operation feature; and, if gesture information is recognized, sending an operation instruction corresponding to the gesture information to the terminal device. The application makes it convenient to operate a terminal device through AR glasses.

Description

Method and device for simulating image by utilizing AR glasses, AR glasses and medium
Technical Field
The present application relates to the field of AR glasses, and in particular, to a method and an apparatus for simulating an image using AR glasses, and a medium.
Background
Augmented Reality (AR) is a technology that calculates the position and angle of a camera image in real time and adds corresponding virtual imagery. It is a new technology that seamlessly integrates real-world information with virtual-world information, aiming to overlay the virtual world onto the real world on a screen and allow the two to interact. The technique was first proposed in 1990. As the computing power of portable electronic products has improved, applications of augmented reality have become more and more widespread.
AR glasses are a product of AR technology: information can be displayed on their lenses. At present, however, terminal devices such as mobile phones cannot be well connected to and interact with AR glasses, so a user cannot conveniently operate a terminal device such as a mobile phone through the AR glasses.
Disclosure of Invention
In order to facilitate the operation of the terminal device through the AR glasses, the application provides a method and a device for simulating an image by utilizing the AR glasses, the AR glasses and a medium.
In a first aspect, the present application provides a method for simulating an image using AR glasses, which adopts the following technical solutions:
a method for simulating an image by using AR glasses comprises
Carrying out pairing connection with terminal equipment corresponding to a user;
acquiring environment image information;
carrying out carrier identification processing based on the environment image information;
if the carrier is identified, controlling to display the display picture information of the terminal equipment on the carrier;
performing operation feature recognition processing based on the environment image information, wherein the operation features comprise hand features;
if the operation characteristics are recognized, recognizing gesture information based on the operation characteristics, wherein the gesture information comprises upward sliding, downward sliding, left sliding, right sliding and clicking;
and if gesture information is recognized, sending an operation instruction corresponding to the gesture information to the terminal equipment based on the gesture information.
By adopting the above technical solution, the AR glasses first pair with the user's terminal device. After pairing, the AR glasses acquire environment image information within the range of their lenses and identify whether a carrier exists in it. Once a carrier is identified, the AR glasses display the terminal device's display picture information on the carrier. The AR glasses then identify operation features, such as hand features, in the environment image information; after an operation feature is identified, gesture information is recognized, and by recognizing gesture information the AR glasses judge whether the terminal device is to be operated. After gesture information is recognized, the AR glasses send an operation instruction to the terminal device based on it, so that the terminal device runs according to the operation instruction. By identifying the carrier, displaying the terminal device's display picture information on it, and recognizing gesture information so that the terminal device runs according to the corresponding operation instruction, the effect of conveniently operating the terminal device through the AR glasses is achieved.
In another possible implementation manner, the performing carrier identification processing based on the environment image information includes:
and inputting the environment image information into a trained first network model for carrier recognition processing, and determining whether a carrier exists in the environment image information based on a carrier recognition result.
By adopting this technical solution, the trained first network model performs carrier recognition on the environment image information to determine whether a carrier exists in it; a trained network model recognizes carriers in the environment image information more accurately.
In another possible implementation manner, the performing operation feature identification processing based on the environment image information includes:
and inputting the environment image information into a trained second network model for operation feature recognition processing, and determining whether operation features exist in the environment image information based on an operation feature recognition result.
By adopting this technical solution, the trained second network model performs operation feature recognition on the environment image information to determine whether an operation feature exists in it; a trained network model recognizes operation features in the environment image information more accurately.
In another possible implementation manner, the performing gesture information recognition based on the operation feature includes:
performing target tracking on the operating characteristics;
generating movement track information of the operation features based on the target tracking result;
comparing the movement track information with at least one preset movement track;
and if the movement track meets any preset movement track, determining gesture information corresponding to any preset movement track.
By adopting this technical solution, target tracking of the operation feature makes it convenient to obtain its movement track information. Comparing the movement track information with at least one preset movement track determines which preset movement track it satisfies, and thus the corresponding gesture information; determining gesture information through target tracking is more accurate.
In another possible implementation manner, the method further includes:
and tracking the carrier, and adjusting the position of the display picture information based on the position of the carrier.
By adopting this technical solution, when the position of the carrier in the environment image information changes, the display picture information may no longer be displayed well on the carrier. Target tracking of the carrier and adjusting the position of the display picture information accordingly keeps the display picture information on the carrier at all times and improves the display effect.
In another possible implementation manner, after the carrier identification processing is performed based on the environment image information, the method further includes:
if the carrier is not identified, carrying out contour detection on the environment image information;
determining an alternative carrier from the environment image information based on the contour detection result;
and moving the display picture information to the alternative carrier for displaying.
By adopting this technical solution, if no carrier is identified in the environment image information, contour detection is performed on it and an alternative carrier is determined from the contour detection result; the AR glasses then display the display picture information on the alternative carrier, so that the display picture information is still displayed well when no carrier is identified.
In another possible implementation manner, the method further includes:
recording display duration information, wherein the display duration information is duration information for displaying the display picture information;
and if the display duration information reaches a preset time threshold, outputting prompt information.
By adopting this technical solution, the AR glasses record how long the display picture information has been displayed and judge whether the display duration reaches a preset time threshold. If it does, the user has been operating the terminal device through the AR glasses for too long, and the AR glasses output prompt information to remind the user, preventing eye fatigue.
In a second aspect, the present application provides an apparatus for simulating an image using AR glasses, which adopts the following technical solutions:
an apparatus for simulating an image using AR glasses, comprising:
the connection module is used for carrying out pairing connection with the terminal equipment corresponding to the user;
the acquisition module is used for acquiring environment image information;
the first identification module is used for carrying out carrier identification processing based on the environment image information;
the display module is used for controlling and displaying the display picture information of the terminal equipment on the carrier when the carrier is identified;
the second identification module is used for carrying out operation characteristic identification processing based on the environment image information, and the operation characteristics comprise hand characteristics;
the third identification module is used for identifying gesture information based on the operation characteristics when the operation characteristics are identified, wherein the gesture information comprises upward sliding, downward sliding, left sliding, right sliding and clicking;
and the sending module is used for sending an operation instruction corresponding to the gesture information to the terminal equipment based on the gesture information when the gesture information is recognized.
By adopting the above technical solution, the connection module first pairs with the user's terminal device. After pairing, the acquisition module acquires environment image information within the range of the AR glasses' lenses, so that the first identification module can identify whether a carrier exists in it. After the first identification module identifies a carrier, the display module displays the terminal device's display picture information on the carrier. The second identification module identifies operation features, such as hand features, in the environment image information; after the second identification module identifies an operation feature, the third identification module recognizes gesture information, by which the AR glasses judge whether the terminal device is to be operated. After the third identification module recognizes gesture information, the sending module sends an operation instruction to the terminal device based on it, so that the terminal device runs according to the operation instruction. By identifying the carrier, displaying the terminal device's display picture information on it, and recognizing gesture information so that the terminal device runs according to the corresponding operation instruction, the effect of conveniently operating the terminal device through the AR glasses is achieved.
In another possible implementation manner, when the carrier identification processing is performed based on the environment image information, the first identification module is specifically configured to:
and inputting the environment image information into a trained first network model for carrying out carrier recognition processing, and determining whether a carrier exists in the environment image information based on a carrier recognition result.
In another possible implementation manner, when performing the operation feature recognition processing based on the environment image information, the second recognition module is specifically configured to:
and inputting the environment image information into a trained second network model for operation feature recognition processing, and determining whether operation features exist in the environment image information based on an operation feature recognition result.
In another possible implementation manner, when performing gesture information recognition based on the operation feature, the third recognition module is specifically configured to:
performing target tracking on the operating characteristics;
generating movement track information of the operation features based on the target tracking result;
comparing the movement track information with at least one preset movement track;
and if the movement track meets any preset movement track, determining gesture information corresponding to any preset movement track.
In another possible implementation manner, the apparatus further includes:
and the adjusting module is used for tracking the carrier and adjusting the position of the display picture information based on the position of the carrier.
In another possible implementation manner, the apparatus further includes:
the contour detection module is used for carrying out contour detection on the environment image information when the carrier is not identified;
a determining module, configured to determine an alternative carrier from the environment image information based on a contour detection result;
and the moving module is used for moving the display picture information to the alternative carrier for displaying.
In another possible implementation manner, the apparatus further includes:
the recording module is used for recording display duration information, and the display duration information is duration information for displaying the display picture information;
and the output module is used for outputting prompt information when the display duration information reaches a preset time threshold.
In a third aspect, the present application provides AR glasses, which adopt the following technical solution:
AR glasses, comprising:
one or more processors;
a memory;
one or more application programs, wherein the one or more application programs are stored in the memory and configured to be executed by the one or more processors, the one or more programs being configured to perform the method of simulating an image using AR glasses according to any possible implementation of the first aspect.
In a fourth aspect, the present application provides a computer-readable storage medium, which adopts the following technical solutions:
a computer-readable storage medium storing a computer program that can be loaded by a processor to execute the method of simulating an image using AR glasses shown in any possible implementation of the first aspect.
In summary, the present application includes at least one of the following beneficial technical effects:
1. The AR glasses first pair with the user's terminal device. After pairing, the AR glasses acquire environment image information within the range of their lenses and identify whether a carrier exists in it. Once a carrier is identified, the AR glasses display the terminal device's display picture information on the carrier. The AR glasses then identify operation features, such as hand features, in the environment image information; after an operation feature is identified, gesture information is recognized, by which the AR glasses judge whether the terminal device is to be operated. After gesture information is recognized, an operation instruction is sent to the terminal device based on it, so that the terminal device runs according to the operation instruction. By identifying the carrier, displaying the terminal device's display picture information on it, and recognizing gesture information so that the terminal device runs according to the corresponding operation instruction, the effect of conveniently operating the terminal device through the AR glasses is achieved;
2. If no carrier is identified in the environment image information, contour detection is performed on it, an alternative carrier is determined from the contour detection result, and the AR glasses display the display picture information on the alternative carrier, so that the display picture information is still displayed well when no carrier is identified.
Drawings
Fig. 1 is a schematic flowchart of a method for simulating an image by using AR glasses according to an embodiment of the present application.
Fig. 2 is a schematic structural diagram of an apparatus for simulating an image by using AR glasses according to an embodiment of the present application.
Fig. 3 is a schematic structural diagram of AR glasses according to an embodiment of the present application.
Detailed Description
The present application is described in further detail below with reference to the attached drawings.
After reading this specification, a person skilled in the art may make modifications to the embodiments as needed without making an inventive contribution, but such modifications are protected by patent law only within the scope of the claims of the present application.
In order to make the objects, technical solutions and advantages of the embodiments of the present application clearer, the technical solutions in the embodiments of the present application will be clearly and completely described below with reference to the drawings in the embodiments of the present application, and it is obvious that the described embodiments are some embodiments of the present application, but not all embodiments. All other embodiments, which can be derived by a person skilled in the art from the embodiments given herein without making any creative effort, shall fall within the protection scope of the present application.
In addition, the term "and/or" herein merely describes an association between associated objects and indicates that three relationships may exist; for example, A and/or B may mean: A exists alone, A and B exist simultaneously, or B exists alone. The character "/" herein generally indicates that the former and latter associated objects are in an "or" relationship, unless otherwise specified.
The embodiments of the present application will be described in further detail with reference to the drawings attached hereto.
The present embodiment provides a method for simulating an image using AR glasses. The method is performed by the AR glasses, and the AR glasses and a terminal device are connected directly or indirectly through wired or wireless communication, which is not limited herein. As shown in fig. 1, the method includes steps S101 to S107, wherein,
and S101, performing pairing connection with the terminal equipment corresponding to the user.
In this embodiment of the application, the AR glasses can connect to the user's terminal device via Bluetooth: the user searches for the AR glasses through the terminal device and, once they are found, chooses whether to pair. The AR glasses can also connect to the user's terminal device through a wireless network: after the AR glasses and the terminal device join the same wireless network, the user can pair them through a software application supplied with the AR glasses. After the AR glasses are paired with the terminal device, other operations can be carried out.
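As an illustrative sketch of the two pairing paths just described (the device model and the boolean handshake fields below are hypothetical placeholders, since the patent does not specify a concrete pairing API):

```python
from dataclasses import dataclass

@dataclass
class Terminal:
    found_glasses_via_bluetooth: bool   # Bluetooth search succeeded
    user_accepts_pairing: bool          # user chose to pair
    wifi_ssid: str                      # network the terminal joined

@dataclass
class Glasses:
    wifi_ssid: str                      # network the glasses joined

def pair(glasses: Glasses, terminal: Terminal) -> bool:
    # Path 1: the user searches for the glasses over Bluetooth and,
    # once they are found, chooses whether to pair.
    if terminal.found_glasses_via_bluetooth and terminal.user_accepts_pairing:
        return True
    # Path 2: both devices join the same wireless network and the
    # software application supplied with the glasses completes pairing.
    if glasses.wifi_ssid and glasses.wifi_ssid == terminal.wifi_ssid:
        return True
    return False  # not paired; the later steps cannot proceed

# Example: pairing over a shared wireless network.
print(pair(Glasses("home"), Terminal(False, False, "home")))  # True
```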
S102, environment image information is obtained.
In this embodiment of the application, the environment image information can be collected by a camera device arranged on the AR glasses. The camera device captures the environment image information in front of the AR glasses, which facilitates subsequent processing of the environment image information by the AR glasses.
S103, carrier identification processing is performed based on the environment image information.
In this embodiment of the application, after the AR glasses acquire the environment image information, carrier identification processing is performed on it. The carrier is used to carry the picture projected by the AR glasses; for example, the carrier may be a wall, a palm, or the sky, which is not limited herein. Displaying the projected picture on a carrier allows it to be shown more clearly.
S104, if a carrier is identified, the display picture information of the terminal device is controlled to be displayed on the carrier.
In this embodiment of the application, once the AR glasses recognize a carrier, the display picture information of the terminal device can be displayed on it. The terminal device transmits its display picture information to the AR glasses as a signal, and the AR glasses receive it and display it on the carrier.
S105, operation feature identification processing is performed based on the environment image information.
The operation features include hand features.
In this embodiment of the application, to let the user operate the terminal device indirectly, the AR glasses perform operation feature identification on the environment image information. An operation feature is a feature that can control the mobile phone, such as a hand feature or a stylus feature. Taking hand features as an example, the user can operate the terminal device by making corresponding gestures with the hand; after the AR glasses identify a hand feature, they check whether the hand makes a gesture for operating the terminal device.
S106, if an operation feature is recognized, gesture information recognition is performed based on the operation feature.
The gesture information comprises upward sliding, downward sliding, left sliding, right sliding and clicking.
In this embodiment of the application, again taking hand features as an example, after the AR glasses recognize a hand feature, they recognize whether the hand makes a gesture; the gesture made by the user conveys information for controlling the terminal device. By recognizing gesture information, the AR glasses judge whether there is gesture information for controlling the terminal device.
S107, if gesture information is recognized, an operation instruction corresponding to the gesture information is sent to the terminal device based on the gesture information.
In this embodiment of the application, operation instructions corresponding to gesture information are stored in the AR glasses in advance. After the AR glasses recognize gesture information, they send the corresponding operation instruction to the terminal device, and the terminal device runs according to the operation instruction after receiving it. For example, when the user slides a finger upward, the AR glasses recognize the upward-slide gesture information, determine that the corresponding operation instruction is "scroll the display screen of the terminal device up or turn the page", and the terminal device runs according to that instruction after receiving it.
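A minimal sketch of such a pre-stored mapping (the instruction names and the send_to_terminal callback are illustrative assumptions; the patent only states that instructions corresponding to gesture information are stored in advance):

```python
# Assumed gesture-to-instruction table; the names are placeholders.
GESTURE_INSTRUCTIONS = {
    "slide_up": "scroll_or_page_up",
    "slide_down": "scroll_or_page_down",
    "slide_left": "previous_page",
    "slide_right": "next_page",
    "click": "select",
}

def dispatch(gesture: str, send_to_terminal) -> bool:
    """Look up the instruction for a recognized gesture and send it."""
    instruction = GESTURE_INSTRUCTIONS.get(gesture)
    if instruction is None:
        return False               # no stored instruction for this gesture
    send_to_terminal(instruction)  # the terminal runs it on receipt
    return True

# Example: the upward finger slide described above.
dispatch("slide_up", print)  # prints "scroll_or_page_up"
```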
In a possible implementation manner of the embodiment of the present application, the carrier identification process performed in step S103 based on the environment image information specifically includes step S1031 (not shown in the figure), wherein,
and S1031, inputting the environment image information into the trained first network model for carrying out carrier recognition processing, and determining whether a carrier exists in the environment image information based on a carrier recognition result.
In this embodiment of the application, the first network model is a neural network model; it may be a convolutional neural network or a recurrent neural network, and its type is not limited herein. Before the initial first network model is trained, a training sample set for carrier recognition is determined; it contains a number of carrier pictures and the name corresponding to each. For example, two of the training samples might be "carrier picture 1, palm" and "carrier picture 2, wall". The training sample set is fed into the first network model for training and learning, yielding the trained first network model. The environment image information is then input into the trained first network model, which performs carrier recognition on it and outputs whether a carrier exists in the environment image information. Feeding the environment image information into a trained network model gives a more accurate result of whether a carrier is present.
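As a sketch of inference with the trained first network model, assuming PyTorch, a saved nn.Module, and a small label set (the patent fixes neither the framework nor the network type); the second network model of step S1051 below would follow the same pattern:

```python
import torch
import torchvision.transforms as T

CARRIER_CLASSES = ["palm", "wall", "sky", "no_carrier"]  # assumed labels

model = torch.load("first_network_model.pt")  # hypothetical file name
model.eval()

# Convert an HxWxC uint8 camera frame to a normalized input tensor.
preprocess = T.Compose([T.ToPILImage(), T.Resize((224, 224)), T.ToTensor()])

def recognize_carrier(environment_image):
    """Return the recognized carrier name, or None if no carrier exists."""
    x = preprocess(environment_image).unsqueeze(0)  # add a batch dimension
    with torch.no_grad():
        logits = model(x)
    label = CARRIER_CLASSES[int(logits.argmax(dim=1))]
    return None if label == "no_carrier" else label
```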
In one possible implementation of this embodiment, the operation feature recognition processing based on the environment image information in step S105 specifically includes step S1051 (not shown in the figure), wherein,
S1051, the environment image information is input into the trained second network model for operation feature recognition processing, and whether an operation feature exists in the environment image information is determined based on the operation feature recognition result.
In this embodiment of the application, the second network model is a neural network model; it may be a convolutional neural network or a recurrent neural network, and its type is not limited herein. Before the initial second network model is trained, a training sample set for operation feature recognition is determined; it contains a number of operation feature pictures and the name corresponding to each. For example, three of the training samples might be "operation feature picture 1, index fingertip", "operation feature picture 2, thumb", and "operation feature picture 3, stylus". The training sample set is fed into the second network model for training and learning, yielding the trained second network model. The environment image information is then input into the trained second network model, which performs operation feature recognition on it and outputs whether an operation feature exists in the environment image information. Feeding the environment image information into a trained network model gives a more accurate result of whether an operation feature is present.
In a possible implementation manner of the embodiment of the present application, the gesture information recognition based on the operation characteristics in S106 specifically includes step S1061 (not shown), step S1062 (not shown), step S1063 (not shown), and step S1064 (not shown), wherein,
S1061, target tracking is performed on the operation feature.
In this embodiment of the application, suppose the operation feature recognized by the AR glasses is the index fingertip. The AR glasses select the index fingertip as the target and track it, following its position in the environment image information in real time, which makes it convenient to judge how its position changes. Target tracking is prior art and is not described here.
S1062, movement track information of the operation feature is generated based on the target tracking result.
In this embodiment of the application, suppose the environment image information is divided into 100 × 100 sub-areas. Continuing the example of step S1061, the initial position of the index fingertip is the sub-area in the first row, fiftieth column; after target tracking, the index fingertip moves to the sub-area in the fiftieth row, fiftieth column. The target tracking result is therefore that the index fingertip moved from the top of the environment image information to the bottom, i.e., the movement track information is that the index fingertip moved from the first row, fiftieth column to the fiftieth row, fiftieth column.
S1063, the movement track information is compared with at least one preset movement track.
In this embodiment of the application, at least one preset movement track is stored in the AR glasses in advance; for example, the preset movement tracks include the operation feature moving from top to bottom, from bottom to top, from left to right, and from right to left. Continuing the example of step S1062, the movement track information of the index fingertip is essentially a top-to-bottom movement. The AR glasses compare the fingertip's movement track information with the four preset movement tracks to determine which preset movement track it matches.
In other embodiments, the movement track information of the operation feature may also be generated from the change in the area the index fingertip occupies in the environment image information. Suppose the index fingertip occupies 10 sub-areas at one moment and, after target tracking, only 5 sub-areas at a later moment. This indicates that the index fingertip moved toward the carrier and away from the AR glasses, shrinking the area it occupies in the environment image information, so its movement track can be determined to be "moving closer to the carrier".
S1064, if the movement track matches any preset movement track, the gesture information corresponding to that preset movement track is determined.
In this embodiment of the application, the gesture information corresponding to each preset movement track is stored in the AR glasses in advance: for example, "the operation feature moves from top to bottom" corresponds to "slide up", "moves from bottom to top" corresponds to "slide down", "moves from left to right" corresponds to "slide left", "moves from right to left" corresponds to "slide right", and "moves closer to the carrier" corresponds to "click". Continuing the example of step S1063, if the fingertip's movement track information matches the preset top-to-bottom movement track, the user's current gesture information is determined to be "slide up".
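A minimal sketch of steps S1061 to S1064, classifying a tracked trajectory against the preset movement tracks (the 50% area-shrink threshold for a click is an illustrative assumption):

```python
# track: (row, col) grid cells of the feature over time, in the
# 100 x 100 subdivision used in the example above.
# areas: number of sub-areas the feature occupies over time.

def classify_gesture(track, areas):
    # A marked shrink in occupied area means the feature moved toward
    # the carrier, which the mapping above treats as a click.
    if areas and areas[-1] * 2 <= areas[0]:
        return "click"
    d_row = track[-1][0] - track[0][0]  # positive: top to bottom
    d_col = track[-1][1] - track[0][1]  # positive: left to right
    if abs(d_row) >= abs(d_col):
        return "slide_up" if d_row > 0 else "slide_down"
    # Note the inversion in the stored mapping: a left-to-right movement
    # corresponds to the "slide left" gesture, and vice versa.
    return "slide_left" if d_col > 0 else "slide_right"

# Example: the fingertip moving from (1, 50) to (50, 50) is "slide up".
print(classify_gesture([(1, 50), (50, 50)], [10, 10]))  # slide_up
```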
In one possible implementation manner of the embodiment of the present application, the method further includes a step S108 (not shown in the figure), wherein,
and S108, tracking the carrier, and adjusting the position of the display screen information based on the position of the carrier.
In this embodiment of the application, the position of the carrier in the environment image information changes as the user's posture changes. Suppose the carrier is a square wall occupying the sub-areas bounded by the twentieth row, fifth column; the twentieth row, fiftieth column; the sixtieth row, fifth column; and the sixtieth row, fiftieth column, so that its center is the sub-area in the fortieth row, twenty-eighth column. The center of the display picture information is also placed at the fortieth row, twenty-eighth column, so that from the user's viewing angle the display picture information sits at the center of the square wall, at the best viewing position.
After recognizing the square wall, the AR glasses mark its center position and track it as a target. When the user's posture changes, the center of the square wall moves within the environment image information; suppose it moves to the sub-area in the fiftieth row, twenty-eighth column. The AR glasses then adjust the position of the display picture information, moving its center in the same direction to the fiftieth row, twenty-eighth column. Thus, even after the carrier's position changes, the user can still view the display picture information at a good viewing angle.
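A minimal sketch of this same-direction adjustment, using the (row, column) grid cells of the example:

```python
def adjust_display_center(old_carrier, new_carrier, display_center):
    """Shift the display center by the offset the carrier center moved."""
    d_row = new_carrier[0] - old_carrier[0]
    d_col = new_carrier[1] - old_carrier[1]
    return (display_center[0] + d_row, display_center[1] + d_col)

# Example above: the wall center moves from the fortieth row to the
# fiftieth row, so the display center follows it in the same direction.
print(adjust_display_center((40, 28), (50, 28), (40, 28)))  # (50, 28)
```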
In a possible implementation of this embodiment, step S103 is further followed by step S109 (not shown), step S110 (not shown), and step S111 (not shown), wherein,
and S109, if the carrier is not identified, carrying out contour detection on the environment image information.
In this embodiment of the application, if the AR glasses do not identify a carrier in the environment image information, an alternative carrier needs to be determined from the environment image information to carry the display picture information of the terminal device. The AR glasses therefore perform contour detection on the acquired environment image information.
Because the colors of different parts of the environment image information differ greatly, an alternative carrier can be determined more accurately through contour detection in image processing. For example, the sky in the environment image information is blue and the road is gray; blue and gray are feature information that can be distinguished by gradients. The AR glasses preprocess the environment image information with smoothing filtering through a two-dimensional Gaussian template, which filters out part of the noise; edge detection is then performed on the filtered image to obtain a preliminary edge response image of the sky region; finally, the edge response image is further processed to obtain a better one. Detecting the boundary between the sky region and the road region by contour detection makes the contours of the sky and the road clearer through image processing.
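An illustrative sketch of this preprocessing and contour detection pipeline with OpenCV; Canny is used here as one common edge detector, whereas the patent only speaks of edge detection generally:

```python
import cv2

def detect_contours(environment_image_bgr):
    """Gaussian smoothing, edge detection, then contour extraction."""
    gray = cv2.cvtColor(environment_image_bgr, cv2.COLOR_BGR2GRAY)
    # Smoothing filter with a two-dimensional Gaussian template to
    # remove part of the noise before edge detection.
    smoothed = cv2.GaussianBlur(gray, (5, 5), 1.4)
    edges = cv2.Canny(smoothed, 50, 150)  # preliminary edge response image
    contours, _ = cv2.findContours(edges, cv2.RETR_EXTERNAL,
                                   cv2.CHAIN_APPROX_SIMPLE)
    return contours
```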
S110, an alternative carrier is determined from the environment image information based on the contour detection result.
In this embodiment of the application, continuing the example of step S109, the AR glasses identify the sky region and the road region in the environment image information and determine the alternative carrier by comparing their sizes: if the sky region is larger than the road region, the sky region is taken as the alternative carrier. In other embodiments, the AR glasses may instead select a region with a regular contour as the alternative carrier.
S111, the display picture information is moved to the alternative carrier for display.
In this embodiment of the application, continuing the example of step S110, after the AR glasses determine the sky as the alternative carrier, they determine the center of the sky region. Supposing the center of the sky is the sub-area in the thirtieth row, fiftieth column, the AR glasses move the center of the terminal device's display picture information to that sub-area, so that the display picture information is displayed well on the "sky" carrier.
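A minimal sketch of steps S110 and S111: the largest detected region is taken as the alternative carrier (an assumption consistent with the region-size comparison above), and its center is where the display picture information is moved:

```python
import cv2

def pick_alternative_carrier_center(contours):
    """Return the (row, col) center of the largest detected region."""
    if not contours:
        return None
    largest = max(contours, key=cv2.contourArea)  # e.g. the sky region
    m = cv2.moments(largest)
    if m["m00"] == 0:                             # degenerate contour
        return None
    row = int(m["m01"] / m["m00"])                # center y (row)
    col = int(m["m10"] / m["m00"])                # center x (column)
    return (row, col)  # the display picture center is moved here
```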
In a possible implementation of this embodiment, step S104 is further followed by step S112 (not shown in the figure) and step S113 (not shown in the figure), wherein,
and S112, recording display duration information.
Wherein, the display duration information is duration information for displaying the display picture information
In this embodiment of the application, the AR glasses start timing when the display picture information is projected onto the carrier for display. Suppose the display picture information starts being displayed on the carrier at "2021.10.28 15:00". The AR glasses time from that moment, making it convenient to record how long the user has been watching the display picture information.
S113, if the display duration information reaches the preset time threshold, prompt information is output.
In this embodiment of the application, suppose the preset time threshold is 1 h. Continuing the example of step S112, the AR glasses time from the moment above, and when "2021.10.28 16:00" is reached they detect that the display duration information has reached 1 h, so the AR glasses output prompt information to remind the user that their eyes have been in use too long. The prompt information may be the text "You have been watching for too long" projected on the carrier, or a speaker device may be arranged on the AR glasses and controlled to play the voice message "You have been watching for too long"; this is not limited herein.
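A minimal sketch of the duration recording and threshold check of steps S112 and S113, using the 1 h threshold from the example:

```python
import time

PRESET_TIME_THRESHOLD_S = 60 * 60  # 1 h, as in the example above

class DisplayDurationRecorder:
    """Start timing when the display picture is projected onto the carrier."""

    def __init__(self):
        self.start = time.monotonic()

    def should_prompt(self) -> bool:
        # True once the display duration reaches the preset threshold;
        # the glasses then show or speak the rest reminder.
        return time.monotonic() - self.start >= PRESET_TIME_THRESHOLD_S
```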
The above embodiments describe the method of simulating an image using AR glasses from the perspective of the method flow; the following embodiments describe the corresponding apparatus from the perspective of virtual modules or virtual units.
An embodiment of the present application provides an apparatus for simulating an image using AR glasses. As shown in fig. 2, the apparatus 20 for simulating an image using AR glasses may specifically include:
a connection module 201, configured to perform pairing connection with a terminal device corresponding to a user;
an obtaining module 202, configured to obtain environment image information;
a first identification module 203, configured to perform carrier identification processing based on the environment image information;
a display module 204, configured to control the display picture information of the terminal device to be displayed on the carrier when the carrier is identified;
a second identification module 205, configured to perform operation feature identification processing based on the environment image information, where the operation feature includes a hand feature;
the third recognition module 206 is configured to, when the operation feature is recognized, perform gesture information recognition based on the operation feature, where the gesture information includes a slide-up, a slide-down, a slide-left, a slide-right, and a click;
and a sending module 207, configured to send, based on the gesture information, an operation instruction corresponding to the gesture information to the terminal device when the gesture information is recognized.
For the embodiment of the present application, the connection module 201 performs pairing connection with the terminal device of the user, after the terminal device is connected in a pairing manner, the obtaining module 202 obtains the environmental image information in the range of the lens of the AR glasses, so that the first recognition module 203 recognizes whether the carrier exists in the environment image information, and after the first recognition module 203 recognizes the carrier, the display module 204 displays the display picture information of the user terminal device on the carrier, the second recognition module 205 recognizes operation features, such as hand features, from the environment image information, the third recognition module 206 recognizes gesture information after the second recognition module 205 recognizes the operation features, whether the terminal device is operated is judged by recognizing the gesture information, and after the gesture information is recognized by the third recognition module 206, the sending module 207 sends an operation instruction to the terminal device based on the gesture information, so that the terminal device operates according to the operation instruction. The carrier is identified, the display picture information of the terminal device is displayed on the carrier, and the gesture information is identified, so that the terminal device operates according to the operation instruction corresponding to the gesture information, and the effect of facilitating the terminal device to be operated through the AR glasses is achieved.
In a possible implementation manner of the embodiment of the present application, when the first identification module 203 performs carrier identification processing based on environment image information, it is specifically configured to:
and inputting the environmental image information into the trained first network model for carrier recognition processing, and determining whether a carrier exists in the environmental image information based on a carrier recognition result.
In a possible implementation manner of the embodiment of the present application, when the second identifying module 205 performs the operation feature identifying processing based on the environment image information, the second identifying module is specifically configured to:
and inputting the environmental image information into the trained second network model for operation feature recognition processing, and determining whether the environmental image information has operation features or not based on the operation feature recognition result.
In a possible implementation manner of the embodiment of the present application, when the third recognition module 206 performs gesture information recognition based on the operation features, it is specifically configured to:
performing target tracking on the operation characteristics;
generating movement track information of the operation features based on the target tracking result;
comparing the movement track information with at least one preset movement track;
and if the movement track meets any preset movement track, determining gesture information corresponding to any preset movement track.
In a possible implementation manner of the embodiment of the present application, the apparatus 20 further includes:
and the adjusting module is used for tracking the carrier and adjusting the position of the display picture information based on the position of the carrier.
In a possible implementation manner of the embodiment of the present application, the apparatus 20 further includes:
the contour detection module is used for carrying out contour detection on the environment image information when the carrier is not identified;
a determining module, configured to determine an alternative carrier from the environment image information based on the contour detection result;
and the moving module is used for moving the display picture information to the alternative carrier for displaying.
In a possible implementation manner of the embodiment of the present application, the apparatus 20 further includes:
the recording module is used for recording display duration information which is duration information for displaying the display picture information;
and the output module is used for outputting prompt information when the display duration information reaches a preset time threshold.
For the embodiment of the present application, the first identification module 203, the second identification module 205, and the third identification module 206 may be the same identification module, may be different identification modules, or may be partially the same identification module.
The embodiment of the present application provides an apparatus 20 for simulating an image by using AR glasses, which is suitable for the above method embodiments and is not described herein again.
An embodiment of the present application provides AR glasses. As shown in fig. 3, the AR glasses 30 include: a processor 301 and a memory 303, the processor 301 being coupled to the memory 303, for example via a bus 302. Optionally, the AR glasses 30 may further include a transceiver 304. It should be noted that in practical applications the transceiver 304 is not limited to one, and the structure of the AR glasses 30 does not limit the embodiments of the present application.
The Processor 301 may be a CPU (Central Processing Unit), a general-purpose Processor, a DSP (Digital Signal Processor), an ASIC (Application Specific Integrated Circuit), an FPGA (Field Programmable Gate Array) or other Programmable logic device, a transistor logic device, a hardware component, or any combination thereof. Which may implement or perform the various illustrative logical blocks, modules, and circuits described in connection with the disclosure. The processor 301 may also be a combination of computing functions, e.g., comprising one or more microprocessors, a combination of a DSP and a microprocessor, or the like.
Bus 302 may include a path that transfers information between the above components. The bus 302 may be a PCI (Peripheral Component Interconnect) bus, an EISA (Extended Industry Standard Architecture) bus, or the like. The bus 302 may be divided into an address bus, a data bus, a control bus, and the like. For ease of illustration, only one thick line is shown in FIG. 3, but this does not mean only one bus or one type of bus.
The memory 303 may be a ROM (Read-Only Memory) or another type of static storage device that can store static information and instructions, a RAM (Random Access Memory) or another type of dynamic storage device that can store information and instructions, an EEPROM (Electrically Erasable Programmable Read-Only Memory), a CD-ROM (Compact Disc Read-Only Memory) or other optical disc storage (including compact discs, laser discs, digital versatile discs, Blu-ray discs, etc.), a magnetic disk storage medium or other magnetic storage device, or any other medium that can be used to carry or store desired program code in the form of instructions or data structures and that can be accessed by a computer, but is not limited thereto.
The memory 303 is used for storing application program codes for executing the scheme of the application, and the processor 301 controls the execution. The processor 301 is configured to execute application program code stored in the memory 303 to implement the aspects illustrated in the foregoing method embodiments.
The AR glasses illustrated in fig. 3 are only one example and should not impose any limitations on the functionality and scope of use of the disclosed embodiments.
The present application provides a computer-readable storage medium on which a computer program is stored; when run on a computer, it enables the computer to execute the corresponding content of the foregoing method embodiments. Compared with the prior art, in the embodiments of the present application the AR glasses first pair with the user's terminal device. After pairing, the AR glasses acquire environment image information within the range of their lenses and identify whether a carrier exists in it. Once a carrier is identified, the AR glasses display the terminal device's display picture information on the carrier. The AR glasses then identify operation features, such as hand features, in the environment image information; after an operation feature is identified, gesture information is recognized, by which the AR glasses judge whether the terminal device is to be operated. After gesture information is recognized, the AR glasses send an operation instruction to the terminal device based on it, so that the terminal device runs according to the operation instruction. By identifying the carrier, displaying the terminal device's display picture information on it, and recognizing gesture information so that the terminal device runs according to the corresponding operation instruction, the effect of conveniently operating the terminal device through the AR glasses is achieved.
It should be understood that, although the steps in the flowchart of the figures are shown in the order indicated by the arrows, they are not necessarily performed in that order; unless explicitly stated herein, their execution is not strictly ordered and they may be performed in other orders. Moreover, at least part of the steps in the flowchart may include multiple sub-steps or stages, which are not necessarily completed at the same moment but may be executed at different moments, and not necessarily in sequence: they may be performed in turn or in alternation with other steps, or with at least part of the sub-steps or stages of other steps.
The foregoing describes only some embodiments of the present application. It should be noted that, for those of ordinary skill in the art, several improvements and refinements can be made without departing from the principles of the present application, and these improvements and refinements should also be regarded as falling within the protection scope of the present application.

Claims (10)

1. A method for simulating an image using AR glasses, comprising:
carrying out pairing connection with terminal equipment corresponding to a user;
acquiring environment image information;
carrying out carrier identification processing based on the environment image information;
if the carrier is identified, controlling to display the display picture information of the terminal equipment on the carrier;
performing operation feature recognition processing based on the environment image information, wherein the operation features comprise hand features;
if the operation characteristics are recognized, recognizing gesture information based on the operation characteristics, wherein the gesture information comprises upward sliding, downward sliding, left sliding, right sliding and clicking;
and if gesture information is recognized, sending an operation instruction corresponding to the gesture information to the terminal equipment based on the gesture information.
2. The method for simulating an image by using AR glasses according to claim 1, wherein the performing a carrier recognition process based on the environment image information comprises:
and inputting the environment image information into a trained first network model for carrying out carrier recognition processing, and determining whether a carrier exists in the environment image information based on a carrier recognition result.
3. The method for simulating an image by using AR glasses according to claim 1, wherein the performing operation feature recognition processing based on the environment image information comprises:
inputting the environment image information into a trained second network model for operation feature recognition processing, and determining whether an operation feature exists in the environment image information based on the operation feature recognition result.
4. The method for simulating an image by using AR glasses according to claim 1, wherein the recognizing gesture information based on the operation feature comprises:
performing target tracking on the operation feature;
generating movement track information of the operation feature based on the target tracking result;
comparing the movement track information with at least one preset movement track;
and if the movement track information matches any one of the preset movement tracks, determining the gesture information corresponding to the matched preset movement track.
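A toy sketch of this track-matching step; the preset tracks are reduced here to net-displacement comparisons, and the 5-pixel click threshold is an assumed value, not something the claim specifies:

def classify_track(points):
    # Map a tracked (x, y) trajectory to one of the preset gestures by
    # comparing the net displacement against simple thresholds.
    if len(points) < 2:
        return None
    dx = points[-1][0] - points[0][0]
    dy = points[-1][1] - points[0][1]
    if abs(dx) < 5 and abs(dy) < 5:
        return "click"                                   # barely moved: a click
    if abs(dx) > abs(dy):
        return "slide_right" if dx > 0 else "slide_left"
    return "slide_down" if dy > 0 else "slide_up"

print(classify_track([(0, 0), (20, 2), (40, 3)]))        # -> slide_right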
5. The method for simulating an image by using AR glasses according to claim 1, wherein the method further comprises:
tracking the carrier, and adjusting the position of the display picture information based on the position of the carrier.
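One hedged way to realize such tracking-driven repositioning is exponential smoothing toward the carrier's latest tracked position; the smoothing factor is an assumption for illustration:

def follow_carrier(display_pos, carrier_pos, smooth=0.5):
    # Move the displayed picture part of the way toward the carrier's
    # newly tracked position; `smooth` damps jitter in the tracking.
    x = display_pos[0] + smooth * (carrier_pos[0] - display_pos[0])
    y = display_pos[1] + smooth * (carrier_pos[1] - display_pos[1])
    return (x, y)

print(follow_carrier((0.0, 0.0), (10.0, 4.0)))  # -> (5.0, 2.0)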
6. The method for simulating an image by using AR glasses according to claim 1, wherein after the performing carrier recognition processing based on the environment image information, the method further comprises:
if no carrier is recognized, performing contour detection on the environment image information;
determining an alternative carrier from the environment image information based on the contour detection result;
and moving the display picture information onto the alternative carrier for display.
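A sketch of this fallback, assuming OpenCV is available for the contour detection; the Canny thresholds and minimum contour area are illustrative values only:

import cv2
import numpy as np

def find_alternative_carrier(gray, min_area=1000.0):
    # Edge-detect, then take the largest external contour as the fallback
    # region onto which the display picture can be moved.
    edges = cv2.Canny(gray, 50, 150)
    contours, _ = cv2.findContours(edges, cv2.RETR_EXTERNAL,
                                   cv2.CHAIN_APPROX_SIMPLE)
    candidates = [c for c in contours if cv2.contourArea(c) >= min_area]
    if not candidates:
        return None
    best = max(candidates, key=cv2.contourArea)
    return cv2.boundingRect(best)  # (x, y, width, height) of the alternative carrier

# Toy test: a filled rectangle on a black background should be picked up.
img = np.zeros((200, 200), dtype=np.uint8)
cv2.rectangle(img, (40, 40), (160, 160), 255, thickness=-1)
print(find_alternative_carrier(img))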
7. The method for simulating an image by using AR glasses according to claim 1, wherein the method further comprises:
recording display duration information, wherein the display duration information is the duration for which the display picture information has been displayed;
and if the display duration information reaches a preset time threshold, outputting prompt information.
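A minimal sketch of the duration bookkeeping; the 30-minute threshold and the prompt text are assumptions for illustration:

import time

class UsageTimer:
    # Records when display of the picture started and checks it against a
    # preset time threshold (30 minutes here, purely an assumed value).
    def __init__(self, threshold_s=30 * 60):
        self.started = time.monotonic()
        self.threshold_s = threshold_s

    def check(self):
        shown_for = time.monotonic() - self.started  # display duration information
        if shown_for >= self.threshold_s:
            return "The picture has been displayed for a while; consider taking a break."
        return None

timer = UsageTimer(threshold_s=0.0)  # zero threshold so the demo prompts immediately
print(timer.check())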
8. An apparatus for simulating an image by using AR glasses, comprising:
a connection module, configured to establish a paired connection with a terminal device corresponding to a user;
an acquisition module, configured to acquire environment image information;
a first recognition module, configured to perform carrier recognition processing based on the environment image information;
a display module, configured to control display of the display picture information of the terminal device on the carrier when a carrier is recognized;
a second recognition module, configured to perform operation feature recognition processing based on the environment image information, wherein the operation features comprise hand features;
a third recognition module, configured to recognize gesture information based on the operation feature when an operation feature is recognized;
and a sending module, configured to send an operation instruction corresponding to the gesture information to the terminal device based on the gesture information when gesture information is recognized.
9. AR glasses, characterized in that they comprise:
one or more processors;
a memory;
one or more applications, wherein the one or more applications are stored in the memory and configured to be executed by the one or more processors to perform the method for simulating an image by using AR glasses according to any one of claims 1 to 7.
10. A computer-readable storage medium on which a computer program is stored, wherein the program, when executed by a processor, implements the method for simulating an image by using AR glasses according to any one of claims 1 to 7.
CN202111362864.4A 2021-11-17 2021-11-17 Method and device for simulating image by utilizing AR glasses, AR glasses and medium Pending CN114063778A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202111362864.4A CN114063778A (en) 2021-11-17 2021-11-17 Method and device for simulating image by utilizing AR glasses, AR glasses and medium

Publications (1)

Publication Number Publication Date
CN114063778A (en) 2022-02-18

Family

ID=80273392

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202111362864.4A Pending CN114063778A (en) 2021-11-17 2021-11-17 Method and device for simulating image by utilizing AR glasses, AR glasses and medium

Country Status (1)

Country Link
CN (1) CN114063778A (en)

Citations (13)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN102446032A (en) * 2010-09-30 2012-05-09 中国移动通信有限公司 Information input method and terminal based on camera
CN103440033A (en) * 2013-08-19 2013-12-11 中国科学院深圳先进技术研究院 Method and device for achieving man-machine interaction based on bare hand and monocular camera
CN104331191A (en) * 2013-07-22 2015-02-04 深圳富泰宏精密工业有限公司 System and method for realizing touch on basis of image recognition
CN104808800A (en) * 2015-05-21 2015-07-29 上海斐讯数据通信技术有限公司 Smart glasses device, mobile terminal and operation method of mobile terminal
CN106297216A (en) * 2016-11-02 2017-01-04 上海理工大学 A kind of alarm device preventing unhealthy eye and glasses
CN106997235A (en) * 2016-01-25 2017-08-01 亮风台(上海)信息科技有限公司 Method, equipment for realizing augmented reality interaction and displaying
CN108421252A (en) * 2017-02-14 2018-08-21 深圳梦境视觉智能科技有限公司 A kind of game implementation method and AR equipment based on AR equipment
CN109934929A (en) * 2017-12-15 2019-06-25 深圳梦境视觉智能科技有限公司 The method, apparatus of image enhancement reality, augmented reality show equipment and terminal
CN110456911A (en) * 2019-08-09 2019-11-15 Oppo广东移动通信有限公司 Electronic equipment control method and device, electronic equipment and readable storage medium
CN110866940A (en) * 2019-11-05 2020-03-06 广东虚拟现实科技有限公司 Virtual picture control method and device, terminal equipment and storage medium
CN111479016A (en) * 2020-04-07 2020-07-31 Oppo广东移动通信有限公司 Terminal use duration reminding method and device, terminal and storage medium
CN112947825A (en) * 2021-01-28 2021-06-11 维沃移动通信有限公司 Display control method, display control device, electronic device, and medium
CN112968787A (en) * 2021-02-01 2021-06-15 杭州光粒科技有限公司 Conference control method, AR (augmented reality) equipment and conference system

Similar Documents

Publication Publication Date Title
US10001838B2 (en) Feature tracking for device input
US9160993B1 (en) Using projection for visual recognition
US9390340B2 (en) Image-based character recognition
TWI654539B (en) Virtual reality interaction method, device and system
US9377859B2 (en) Enhanced detection of circular engagement gesture
JP6013583B2 (en) Method for emphasizing effective interface elements
CN104350509B (en) Quick attitude detector
US9207852B1 (en) Input mechanisms for electronic devices
CN110232311A (en) Dividing method, device and the computer equipment of hand images
CN111399638B (en) Blind computer and intelligent mobile phone auxiliary control method suitable for blind computer
US20130155026A1 (en) New kind of multi-touch input device
CN102906671A (en) Gesture input device and gesture input method
CN109344793A (en) Aerial hand-written method, apparatus, equipment and computer readable storage medium for identification
CN107077285A (en) Operation device, the information processor with operation device and the operation acceptance method for information processor
CN111680686B (en) Signboard information identification method, device, terminal and storage medium
US9547420B1 (en) Spatial approaches to text suggestion
US20140232672A1 (en) Method and terminal for triggering application programs and application program functions
CN114063778A (en) Method and device for simulating image by utilizing AR glasses, AR glasses and medium
US20160062534A1 (en) Information Processing Method And Electronic Device
US20220050528A1 (en) Electronic device for simulating a mouse
CN110007748B (en) Terminal control method, processing device, storage medium and terminal
CN103558948A (en) Man-machine interaction method applied to virtual optical keyboard
Hegde et al. A fingertip gestural user interface without depth data for mixed reality applications
US10372297B2 (en) Image control method and device
WO2020170851A1 (en) Information processing device, information processing method, and program

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination