CN109241900B - Wearable device control method and device, storage medium and wearable device - Google Patents


Info

Publication number
CN109241900B
Authority
CN
China
Prior art keywords
target
guide view
wearable device
characters
user
Prior art date
Legal status
Active
Application number
CN201811000739.7A
Other languages
Chinese (zh)
Other versions
CN109241900A (en)
Inventor
魏苏龙
林肇堃
麦绮兰
Current Assignee
Guangdong Oppo Mobile Telecommunications Corp Ltd
Original Assignee
Guangdong Oppo Mobile Telecommunications Corp Ltd
Priority date
Application filed by Guangdong Oppo Mobile Telecommunications Corp Ltd filed Critical Guangdong Oppo Mobile Telecommunications Corp Ltd
Priority to CN201811000739.7A
Publication of CN109241900A
Application granted
Publication of CN109241900B

Classifications

    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06V - IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V 20/00 - Scenes; Scene-specific elements
    • G06V 20/20 - Scenes; Scene-specific elements in augmented reality scenes
    • G06F - ELECTRIC DIGITAL DATA PROCESSING
    • G06F 40/00 - Handling natural language data
    • G06F 40/40 - Processing or translation of natural language
    • G06F 40/58 - Use of machine translation, e.g. for multi-lingual retrieval, for server-side translation for client devices or for real-time translation
    • G06V 30/00 - Character recognition; Recognising digital ink; Document-oriented image-based pattern recognition
    • G06V 30/10 - Character recognition

Abstract

An embodiment of the present application discloses a wearable device control method and apparatus, a storage medium, and a wearable device. The method includes: acquiring a guide view of a target building; recognizing the characters in the guide view and converting them into target characters corresponding to a preset language type; and controlling the wearable device to display a target guide view containing the target characters. With this technical solution, a guide view of a building can be acquired while the user wears the wearable device, and the characters in the guide view can be converted into a language familiar to the user, which helps the user understand the internal layout of the building and enriches the functions of the wearable device.

Description

Wearable device control method and device, storage medium and wearable device
Technical Field
The embodiments of the present application relate to the technical field of smart devices, and in particular to a wearable device control method and apparatus, a storage medium, and a wearable device.
Background
Smart wearable devices have now entered the daily lives of a great many users, providing convenience in many aspects of their lives and work.
With the development of smart wearable technology, current smart wearable devices can implement a rich variety of functions. However, these functions are still not complete enough and need to be improved.
Disclosure of Invention
The embodiments of the present application provide a wearable device control method and apparatus, a storage medium, and a wearable device, which can optimize the control scheme of the wearable device.
In a first aspect, an embodiment of the present application provides a method for controlling a wearable device, including:
acquiring a guide view of a target building;
recognizing characters in the guide view, and converting the characters into target characters corresponding to a preset language type;
and controlling the wearable device to display a target guide view containing the target characters.
In a second aspect, an embodiment of the present application provides a control device for a wearable device, including:
the guide view acquisition module is used for acquiring a guide view of a target building;
the character conversion module is used for identifying characters in the guide view and converting the characters into target characters corresponding to a preset language type;
and the guide view display module is used for controlling the wearable device to display a target guide view containing the target characters.
In a third aspect, embodiments of the present application provide a computer-readable storage medium, on which a computer program is stored, where the computer program, when executed by a processor, implements a control method of a wearable device according to embodiments of the present application.
In a fourth aspect, an embodiment of the present application provides a wearable device, including a memory, a processor, and a computer program stored on the memory and executable on the processor, where the processor executes the computer program to implement the control method of the wearable device according to the embodiment of the present application.
According to the control scheme of the wearable device, a guide view of a target building is acquired, the characters in the guide view are recognized and converted into target characters corresponding to a preset language type, and the wearable device is controlled to display a target guide view containing the target characters. With this technical solution, a guide view of a building can be acquired while the user wears the wearable device, and the characters in the guide view can be converted into a language familiar to the user, which helps the user understand the internal layout of the building and enriches the functions of the wearable device.
Drawings
Fig. 1 is a schematic flowchart of a wearable device control method according to an embodiment of the present application;
fig. 2 is a schematic flowchart of another wearable device control method according to an embodiment of the present application;
fig. 3 is a schematic flowchart of yet another wearable device control method according to an embodiment of the present application;
fig. 4 is a structural block diagram of a wearable device control apparatus according to an embodiment of the present application;
fig. 5 is a schematic structural diagram of a wearable device according to an embodiment of the present application;
fig. 6 is a schematic structural diagram of another wearable device according to an embodiment of the present application;
fig. 7 is a schematic entity diagram of a wearable device according to an embodiment of the present application.
Detailed Description
The technical solution of the present application is further explained below through specific embodiments in combination with the accompanying drawings. It is to be understood that the specific embodiments described herein merely illustrate the application and do not limit it. It should further be noted that, for convenience of description, the drawings show only the structures related to the present application rather than all structures.
Before the exemplary embodiments are discussed in more detail, it should be noted that some exemplary embodiments are described as processes or methods depicted as flowcharts. Although a flowchart may describe steps as a sequential process, many of the steps can be performed in parallel, concurrently, or simultaneously, and the order of the steps may be rearranged. A process may be terminated when its operations are completed, but may also have additional steps not included in the figure. A process may correspond to a method, a function, a procedure, a subroutine, and the like.
Fig. 1 is a flowchart of a wearable device control method according to an embodiment of the present application. The method may be applied to a wearable device and executed by a control apparatus of the wearable device, where the apparatus may be implemented in software and/or hardware and is generally integrated in the wearable device. As shown in fig. 1, the method includes:
Step 101: obtain a guide view of a target building.
In the embodiments of the present application, the specific structure, shape, volume, and other attributes of the wearable device are not limited. The wearable device may include devices worn on the user's head, such as smart glasses and smart helmets. Taking smart glasses as an example, the smart glasses include a frame body and lenses, where the frame body comprises temples and a lens frame. Optionally, a breathing light (for example, an LED light) may be arranged on the inner side of a temple, and it may flash according to the wearer's heartbeat frequency. A touch area (such as a touch panel) and a bone conduction area are also arranged on the temples. The touch area is arranged on the outer side of a temple and contains a touch detection module for detecting the user's touch operations; for example, a touch sensor module may output a low level in the initial state and a high level when a touch operation occurs. When the user wears the smart glasses, the side of a temple close to the face is defined as the inner side, and the opposite side away from the face is defined as the outer side. The bone conduction area is arranged on the part of a temple near the ear and contains a bone conduction component, such as a bone conduction earphone or a bone conduction sensor. A heart rate detection module (such as a heart rate sensor) is arranged at the position where a temple is close to the temple region of the face and is used to acquire the heart rate information of the user wearing the smart glasses. A smart microphone arranged on the lens frame can intelligently recognize the current ambient noise level and automatically adjust the microphone's performance according to it. A distance sensor, a gyroscope, and the like are also arranged on the lens frame, and electrooculography (EOG) sensors are arranged on the lens frame and the nose pads to acquire the state of the user's eyes. In addition, a microprocessing area is arranged on a temple; the microprocessor arranged there is electrically connected to the touch detection module, the bone conduction earphone, the heart rate sensor, the smart microphone, the distance sensor, the gyroscope, the electrooculography sensors, and the other components described above, and is used to receive the data to be processed, perform data operations and data processing, and output control instructions to the corresponding components. It should be noted that the smart glasses may download multimedia resources from the cloud over the Internet for playback, and may also acquire multimedia resources from a terminal device by establishing a communication connection with it, which is not limited in the present application. A first camera may also be arranged on the outer side of the lens frame for capturing pictures of the scene in front of the wearer.
In the embodiments of the present application, the buildings may include various types of buildings accessible to users, such as shopping malls, office buildings, teaching buildings, museums, libraries, open scenic areas (such as zoos, amusement parks, and forest parks), and scenic-spot buildings (such as ancient towers, palaces, and television towers). The target building may be any one of these buildings; for example, it may be determined according to the user's selection, or the building where the user is located may be determined according to the current positioning information. The guide view may include information on the internal layout of the building, such as floor information, room distribution diagrams, room names or numbers, information on the articles in a room, and the distribution of markers (such as service desks, escalators, elevators, and monuments), which may be determined according to the specific situation of the building. The guide view may be a plan view or a three-dimensional perspective view, which is not limited in the embodiments of the present application.
For example, the guide view of a building may be stored in the built-in memory of the wearable device, or in an external terminal device such as a server or a mobile terminal that has been associated with the wearable device in advance. Accordingly, the wearable device may acquire the guide view of the building from its built-in memory or from the external terminal. Optionally, guide views of common buildings may be pre-stored in the built-in memory of the wearable device. After the target building is determined, the wearable device first searches its built-in memory for a guide view corresponding to the target building; if one exists, it can be read directly, which effectively speeds up acquiring the guide view. If not, the guide view can be acquired from the preset terminal device, which saves the built-in storage space of the wearable device while making guide views of various buildings available more comprehensively.
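The following minimal Python sketch illustrates this cache-first lookup; GuideViewStore, fetch_guide_view, and building_id are illustrative names under stated assumptions, not names from the patent:

```python
class GuideViewStore:
    def __init__(self, builtin_cache: dict, terminal_client):
        self.cache = builtin_cache        # built-in memory of the wearable device
        self.terminal = terminal_client   # pre-associated server or mobile terminal

    def get_guide_view(self, building_id: str):
        view = self.cache.get(building_id)
        if view is not None:
            return view                   # fast path: read from built-in memory
        view = self.terminal.fetch_guide_view(building_id)
        if view is not None:
            self.cache[building_id] = view  # keep a copy for next time
        return view
```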
Step 102: recognize the characters in the guide view and convert them into target characters corresponding to a preset language type.
For example, the preset language type may be the default language type of the system or a language type preset by the user. The language types may include Chinese, English, Japanese, Korean, French, and other languages. The default language type of the system may be determined according to the official language of the country where the wearable device was manufactured or purchased; for example, if the user purchased the wearable device in China, the default language type may be Chinese, and if the user purchased it in the United States, the default language type may be English. It will be appreciated that the user may not purchase the wearable device in his or her own country, in which case the user may set a familiar language as the preset language type; for example, a Chinese citizen who purchases the wearable device in the United States may set the preset language type to Chinese.
Optionally, recognizing the characters in the guide view and converting them into target characters corresponding to the preset language type includes: recognizing the characters in the guide view and determining the language type corresponding to the characters; and, when it is determined that this language type does not match the preset language type, converting the characters into target characters corresponding to the preset language type. The advantage of this arrangement is that if the characters in the acquired guide view already match the preset language type, no character conversion is required, which avoids unnecessary operations.
The character recognition process is not specifically limited in the embodiments of the present application, and character recognition means in the related art may be adopted; for example, the process may generally include preprocessing steps such as grayscale conversion, noise reduction, and binarization, followed by character segmentation and normalization. After the characters in the guide view are recognized, their language type is determined, and if it is not the preset language type, the characters in the guide view are converted into target characters corresponding to the preset language type. For example, if the preset language type is Chinese and the characters in the guide view are Korean, the Korean characters in the guide view can be converted into Chinese characters.
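A compact sketch of this recognize-then-convert-on-mismatch step is shown below. The TextRegion format and the detect_language and translate callables are injected assumptions, since the patent names no particular recognition or translation component:

```python
from dataclasses import dataclass
from typing import Callable, List, Tuple

@dataclass
class TextRegion:
    text: str                            # characters recognized in the guide view
    bbox: Tuple[int, int, int, int]      # (x, y, w, h) location of the characters

def convert_regions(regions: List[TextRegion],
                    detect_language: Callable[[str], str],
                    translate: Callable[[str, str, str], str],
                    preset_lang: str = "zh") -> List[TextRegion]:
    """Convert recognized characters to the preset language type, skipping
    regions whose language already matches (step 102 above)."""
    out = []
    for r in regions:
        src = detect_language(r.text)
        if src != preset_lang:           # convert only on a mismatch
            r = TextRegion(translate(r.text, src, preset_lang), r.bbox)
        out.append(r)
    return out
```

The converted regions can then either be overlaid next to the original characters or drawn in their place to form the target guide view of step 103.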
Step 103: control the wearable device to display a target guide view containing the target characters.
In the embodiments of the present application, after the characters in the guide view are converted into target characters corresponding to the preset language type, the target characters may be added to the original guide view while the original characters are retained, or the original characters may be replaced by the target characters, to obtain the target guide view.
In the embodiments of the present application, the wearable device has an image display function, and its imaging principle is not limited. For example, imaging may be performed by a micro projector: using the principle of optical reflection projection, the micro projector projects light onto a reflective screen, and the light is then refracted through a convex lens to the eyeball, achieving a first-stage magnification so that a virtual screen large enough to display text information, images, and the like is formed in front of the eye. Alternatively, a low-power laser may be used to display an image of certain pixels on the glasses lens, from which it is reflected onto the user's retina to realize the image display. Of course, other imaging modes are possible and are not described one by one in the embodiments of the present application.
After the wearable device is controlled to display the target guide view containing the target characters, the user can view, through the wearable device, a guide view of the target building that matches his or her language habits, conveniently find places of interest, and reach them according to the schematic information in the guide view. Optionally, information such as the user's current position and orientation may be marked in the target guide view, further helping the user read the target guide view quickly and travel to the destination more efficiently.
The wearable device control method provided in the embodiments of the present application acquires a guide view of a target building, recognizes the characters in the guide view, converts them into target characters corresponding to a preset language type, and controls the wearable device to display a target guide view containing the target characters. With this technical solution, a guide view of a building can be acquired while the user wears the wearable device, and the characters in the guide view can be converted into a language familiar to the user, which helps the user understand the internal layout of the building and enriches the functions of the wearable device.
In some embodiments, after the wearable device is controlled to display the target guide view containing the target characters, the method further includes: detecting a first operation of the user; determining a target position according to the first operation; and planning a route from the current position to the target position and displaying the route information on the target guide view. The advantage of this arrangement is that it provides a reasonable travel route for the user and helps the user quickly reach the intended destination. As a smart device, the wearable device can help the user realize various functions, and the user can control it through human-computer interaction. The specific manner in which the wearable device receives operations is not limited in the embodiments of the present application, and the first operation may be any operation, in any form, for controlling the wearable device. For example, a physical key or a virtual key (e.g., a touch key) may be provided on the wearable device, and the user may press or touch the key in a specified trigger manner (e.g., a click, a long press, or multiple consecutive clicks) to express an operation intention. Illustratively, a voice recognition module may also be provided on the wearable device; it performs semantic analysis on natural language spoken by the user to obtain the corresponding voice content, and the wearable device is controlled to respond to the user's voice command according to that content. For example, a sensor for sensing user actions such as gestures (e.g., an ultrasonic sensor) may also be provided on the wearable device; it recognizes the motion by which the user expresses an operation intention, and the wearable device is controlled to respond correspondingly according to the type of motion.
Optionally, the operation type corresponding to the first operation includes gaze, voice, or gesture.
Taking gaze as an example, the wearable device may detect the object on which the user's eyes are focused, and the specific detection mode may vary, which is not limited in the embodiments of the present application. For example, a second camera may be arranged on the inner side of the frame of the wearable device to track the movement of the user's eyes, analyze the direction of the user's line of sight, and determine the object the user is gazing at, which may represent the target position the user wishes to reach. The second camera may be on in real time or may be turned on only when an opening condition is met. Optionally, determining the target position according to the first operation may include: acquiring a target object gazed at by the user in the target guide view, and determining the position corresponding to that target object as the target position. Further, determining the position corresponding to the target object as the target position includes: determining the position corresponding to the target object as the target position when the duration for which the user gazes at the target object reaches a preset duration threshold, or when the number of times the user gazes at the target object reaches a preset count threshold. The advantage of this is that the place the user really cares about is determined more accurately, and places the user is not interested in are prevented from being mistakenly determined as the target position. It can be understood that when the user gazes at an object for a long time or gazes at it multiple times, the user is probably paying attention to it; therefore, when the gaze duration reaches the preset duration threshold or the gaze count reaches the preset count threshold, the position corresponding to the target object is considered the place the user really wants to go. The specific value of the preset duration threshold is not limited and may be, for example, 5 seconds; the specific value of the preset count threshold is not limited and may be, for example, 3 times. The human eye is approximately spherical and is located in the orbit. The eyeball comprises the eyeball wall, the inner cavity and contents of the eye, nerves, blood vessels, and other tissues. The eyeball wall is mainly divided into an outer layer, a middle layer, and an inner layer, with the outer layer composed of the cornea and the sclera: the anterior 1/6 is the transparent cornea, and the remaining 5/6 is the white sclera, commonly known as the "white of the eye". In the embodiments of the present application, the second camera described above may acquire an image of the user's eyes, from which the relative position of the eyeball within the orbit is calculated; the gaze direction of the user's eyes is thereby computed and mapped onto the target guide view, so that the target object the user is gazing at in the target guide view is determined.
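A short sketch of this dwell rule follows, using the example thresholds from the paragraph above (5 seconds of sustained gaze, or 3 separate fixations). The (timestamp, gaze_direction) sample format and the map_gaze_to_object projection are assumptions for illustration:

```python
DWELL_SECONDS = 5.0
FIXATION_COUNT = 3

def select_target(gaze_samples, map_gaze_to_object):
    dwell_start = {}   # object -> time the current fixation on it began
    fixations = {}     # object -> number of completed fixations
    last_obj = None
    for t, direction in gaze_samples:
        obj = map_gaze_to_object(direction)   # object in the guide view, or None
        if obj != last_obj:
            if last_obj is not None:          # a fixation on last_obj just ended
                fixations[last_obj] = fixations.get(last_obj, 0) + 1
                if fixations[last_obj] >= FIXATION_COUNT:
                    return last_obj
            if obj is not None:
                dwell_start[obj] = t          # a new fixation begins
            last_obj = obj
        elif obj is not None and t - dwell_start[obj] >= DWELL_SECONDS:
            return obj                        # sustained gaze confirms the target
    return None
```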
Taking voice as an example, when the user views the target guide view through the wearable device, the user sees characters in a familiar language; upon finding the place he or she wants to reach, the user can directly issue a voice instruction in natural language, such as "I want to go to shop XX". The wearable device can then analyze the user's speech with the voice recognition module and extract the target position appearing in the target guide view.
Taking gesture as an example, the wearable device generally presents a magnified virtual image. The user can see his or her hand through the lens of the wearable device, observe the relative position of the hand with respect to each position in the target guide view, and select the place to reach with gestures such as drawing a circle, clicking, double-clicking, or holding (e.g., keeping the same gesture for a preset duration). The wearable device can capture and recognize the user's gestures with the first camera, determine the place corresponding to a gesture that satisfies the condition, and determine that place as the target position.
After the target position is determined, the user's current position can be obtained. If the user's position is already displayed in the target guide view, it does not need to be obtained again; otherwise, the positioning module in the wearable device can be used to determine the current position. The wearable device can calculate feasible routes from the current position to the target position and recommend them to the user; generally, the shortest feasible route can be recommended to help the user reach the target position quickly. The route information displayed on the target guide view may be presented as text, or as lines, arrows, or the like, which is not limited in the embodiments of the present application.
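Assuming the guide view has been reduced to a weighted graph of walkable points (corridors, doorways, escalators), the shortest feasible route can be computed with plain Dijkstra, as in the sketch below; the patent does not prescribe any particular routing algorithm, so this is only an illustrative choice:

```python
import heapq

def shortest_route(graph, start, goal):
    """graph: dict mapping node -> list of (neighbor, distance) pairs."""
    queue = [(0.0, start, [start])]
    visited = set()
    while queue:
        dist, node, path = heapq.heappop(queue)
        if node == goal:
            return dist, path          # route to draw on the target guide view
        if node in visited:
            continue
        visited.add(node)
        for neighbor, weight in graph.get(node, []):
            if neighbor not in visited:
                heapq.heappush(queue, (dist + weight, neighbor, path + [neighbor]))
    return None                        # no feasible route
```

The returned path can then be rendered on the target guide view as lines or arrows, per the paragraph above.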
In some embodiments, after the wearable device is controlled to display the target guide view containing the target characters, the method further includes: performing a corresponding display adjustment operation on the target guide view according to the user's gesture operation. The advantage of this is that the display mode of the target guide view can be adjusted flexibly, making it easier for the user to view. The display adjustment operation may include, for example, zoom-in, zoom-out, and move operations. For example, when the target guide view contains a lot of information, the user may have difficulty finding the information of interest; the user can then enlarge the target guide view with a gesture operation to see its details more clearly, and move the enlarged view with a gesture operation so that the region of interest is brought to the front for convenient viewing.
In some embodiments, acquiring the guide view of the target building includes: acquiring current positioning information; determining the target building according to the positioning information; and acquiring the guide view of the target building. The advantage of this arrangement is that the building where the user is located can be determined automatically through positioning, without manual setting by the user. Optionally, a positioning module, such as a Global Positioning System (GPS) module, may be built into the wearable device and used to determine the target building where the user is located.
In some embodiments, acquiring the guide view of the target building includes: acquiring information on changes in the user's center of gravity; determining the corresponding target floor number in the target building according to the center-of-gravity change information; and acquiring the guide view corresponding to the target floor number. The advantage of this arrangement is that the target building may have multiple floors; the floor where the user is currently located can be selected automatically, and the guide view corresponding to the current floor can be acquired for subsequent display, which prevents the guide views of other, irrelevant floors from interfering with that of the current floor and wasting the visual area of the wearable device. Taking a multi-story shopping mall as an example, positioning information generally yields only the user's position in the horizontal direction, and satellite positioning such as GPS is generally unavailable inside buildings. The center-of-gravity change information can be obtained with a motion sensor integrated in the wearable device, such as an acceleration sensor, which is not limited in the embodiments of the present application.
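The sketch below shows one way such center-of-gravity changes could be turned into a floor number, assuming gravity-compensated vertical acceleration samples from the integrated acceleration sensor. The double-integration approach and the 4-meter floor height are assumptions for illustration only (double integration drifts quickly in practice, so a real device would fuse further signals), since the patent leaves the sensor processing open:

```python
FLOOR_HEIGHT_M = 4.0   # assumed constant, not a value from the patent

def estimate_floor(vertical_accel_samples, dt: float, start_floor: int = 1) -> int:
    velocity = 0.0
    height = 0.0
    for a in vertical_accel_samples:   # m/s^2, gravity removed
        velocity += a * dt             # integrate acceleration -> velocity
        height += velocity * dt        # integrate velocity -> displacement
    return start_floor + round(height / FLOOR_HEIGHT_M)
```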
In some embodiments, after the wearable device is controlled to display the target guide view containing the target characters, the method further includes: detecting a second operation of the user; determining the number of the floor to switch to according to the second operation; and acquiring the guide view corresponding to that floor number and repeating the operations related to recognizing the characters in the guide view. The advantage of this arrangement is that the user can conveniently switch between the guide views of different floors and find the desired place anywhere in the building. Optionally, the operation type corresponding to the second operation may be the same as or different from that of the first operation; for its specific implementation, refer to the description of the first operation, which is not repeated here. Illustratively, the identifier corresponding to each floor number can be displayed through the wearable device. When the user gazes at, speaks, or selects by gesture the floor number to switch to, the guide view corresponding to that floor number is acquired, and the operations of recognizing the characters in the guide view are repeated: the characters in that guide view are recognized and converted into target characters corresponding to the preset language type, and the wearable device is controlled to display the target guide view, containing the target characters, that corresponds to the floor to switch to.
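Tying the pieces together, here is a sketch of that floor-switch flow, reusing the convert_regions helper from the earlier sketch; every other name (parse_floor_number, ocr_read, render, store, device) is a hypothetical stand-in injected as a parameter:

```python
def on_second_operation(operation, store, building_id, preset_lang,
                        parse_floor_number, ocr_read, detect_language,
                        translate, render, device):
    floor = parse_floor_number(operation)    # e.g. "switch to the third floor" -> 3
    view = store.get_guide_view((building_id, floor))
    regions = ocr_read(view)                 # recognize the characters again
    converted = convert_regions(regions, detect_language, translate, preset_lang)
    device.display(render(view, converted))  # display the new target guide view
```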
Fig. 2 is a schematic flowchart of another control method for a wearable device according to an embodiment of the present application, where the method includes the following steps:
Step 201: start the guide function of the wearable device.
For example, when the user wearing the wearable device arrives at a building, the user can actively start the guide function, for example by touching a touch area on the outer side of a temple that represents the guide function, or by pressing a physical key on the frame that represents it.
Step 202: acquire current positioning information with the positioning module built into the wearable device, and determine the target building according to the positioning information.
For example, the wearable device may automatically determine the target building where the user is currently located according to the positioning information; alternatively, the positioning information may be sent to a preset server, which determines the target building according to it.
Step 203: acquire a guide view of the target building.
For example, the positioning information or the determined target building may be sent to a preset server, which searches for the guide view corresponding to the target building and feeds it back to the wearable device.
Step 204: recognize the characters in the guide view and determine the language type corresponding to the characters.
Step 205: determine whether this language type is the preset language type; if so, execute step 207; otherwise, execute step 206.
Step 206: convert the characters in the guide view into target characters corresponding to the preset language type.
Step 207: control the wearable device to display a target guide view containing the target characters.
It can be understood that if step 207 is executed directly after step 205, the original characters in the guide view can be regarded as the target characters of this step, since the two language types are identical; accordingly, the original guide view can also be regarded as the target guide view of this step.
Step 208: acquire the target object gazed at by the user in the target guide view, and determine the position corresponding to the target object as the target position.
Optionally, when the duration for which the user gazes at the target object reaches the preset duration threshold, the position corresponding to the target object is determined as the target position; if the duration does not reach the preset duration threshold, the target object is determined again.
Step 209: plan a route from the current position to the target position, and display the route information on the target guide view.
According to the wearable device control method provided in this embodiment of the present application, after the guide function of the wearable device is started, the wearable device can automatically locate the building where the user is, acquire the guide view corresponding to the building, convert the characters in the guide view into characters of the preset language type, and display them through the wearable device; it then determines the target position according to the user's gaze operation and intelligently recommends route information to the user, helping the user quickly reach the intended place.
Fig. 3 is a flowchart of yet another wearable device control method according to an embodiment of the present application, applicable to the application scenario of a multi-story building. The method includes:
Step 301: start the guide function of the wearable device.
Step 302: acquire current positioning information with the positioning module built into the wearable device, and determine the target building according to the positioning information.
Step 303: acquire information on changes in the user's center of gravity when the target building is determined to be a multi-story building.
For example, multi-story buildings may include buildings with multiple floors, such as shopping malls, libraries, and teaching buildings.
Step 304: determine the corresponding target floor number in the target building according to the center-of-gravity change information, and acquire a first guide view corresponding to the target floor number.
Step 305: recognize the first characters in the first guide view, and convert them into first target characters corresponding to the preset language type.
Step 306: control the wearable device to display a first target guide view containing the first target characters.
Step 307: perform a corresponding display adjustment operation on the first target guide view according to a detected first gesture operation.
Step 308: determine a target position in the first target guide view according to a detected second gesture operation, and plan a route from the current position to the target position.
Step 309: display the route information in the first target guide view.
Step 310: determine the number of the floor to switch to according to detected user speech, acquire a second guide view corresponding to that floor number, and convert the second characters in the second guide view into second target characters corresponding to the preset language type.
For example, the user may issue a voice instruction in natural language, such as "switch to the third floor", and the wearable device can recognize that the number of the floor to switch to is three.
Step 311: control the wearable device to display a second target guide view containing the second target characters.
It can be understood that operations similar to those performed on the first target guide view, such as steps 307 to 309, may next be performed on the second target guide view, which is not repeated here.
According to the wearable device control method provided in this embodiment of the present application, after the guide function of the wearable device is started, the wearable device can automatically locate the building where the user is, automatically detect the floor where the user is by means of center-of-gravity changes, acquire the guide view of the corresponding floor, convert the characters in the guide view into characters of the preset language type, and display them through the wearable device; it controls the display of the guide view and determines the target position according to the user's gesture operations, and intelligently recommends route information to the user, helping the user quickly reach the intended place and effectively improving the intelligence of the wearable device.
Fig. 4 is a structural block diagram of a wearable device control apparatus according to an embodiment of the present application. The apparatus may be implemented in software and/or hardware and is generally integrated in a wearable device, which it can control by executing the wearable device control method. As shown in fig. 4, the apparatus includes:
a guide view acquiring module 401, configured to acquire a guide view of a target building;
a character conversion module 402, configured to identify characters in the guide view, and convert the characters into target characters corresponding to a preset language type;
and a guide view display module 403, configured to control the wearable device to display a target guide view containing the target characters.
The wearable device control apparatus provided in this embodiment of the present application acquires a guide view of a target building, recognizes the characters in the guide view, converts them into target characters corresponding to a preset language type, and controls the wearable device to display a target guide view containing the target characters. With this technical solution, a guide view of a building can be acquired while the user wears the wearable device, and the characters in the guide view can be converted into a language familiar to the user, which helps the user understand the internal layout of the building and enriches the functions of the wearable device.
Optionally, recognizing the characters in the guide view and converting them into target characters corresponding to the preset language type includes:
recognizing the characters in the guide view and determining the language type corresponding to the characters;
and, when it is determined that this language type does not match the preset language type, converting the characters into target characters corresponding to the preset language type.
Optionally, the apparatus further comprises:
the first operation detection module is used for detecting a first operation of a user after the wearable device is controlled to display a target guide view containing the target characters;
the target position determining module is used for determining a target position according to the first operation;
and the route planning module is used for planning a route from the current position to the target position and displaying the route information on the target guide view.
Optionally, the operation type corresponding to the first operation includes a gaze, a voice, or a gesture.
Optionally, the obtaining of the guide view of the target building includes:
acquiring current positioning information;
determining a target building according to the positioning information;
and acquiring a guide view of the target building.
Optionally, the obtaining of the guide view of the target building includes:
acquiring the gravity center change information of a user;
determining a corresponding target floor sequence number in a target building according to the gravity center change information;
and acquiring a guide view corresponding to the target floor sequence number.
Optionally, the apparatus further includes the following modules, which operate after the wearable device is controlled to display the target guide view containing the target characters:
the second operation detection module is used for detecting a second operation of the user;
the to-be-switched floor number determination module is used for determining the number of the floor to switch to according to the second operation;
the guide view acquisition module is further configured to acquire the guide view corresponding to the number of the floor to switch to, and to instruct the other related modules to repeat the operations of recognizing the characters in the guide view.
Embodiments of the present application further provide a storage medium containing computer-executable instructions which, when executed by a computer processor, perform a wearable device control method, the method including:
acquiring a guide view of a target building;
recognizing characters in the guide view, and converting the characters into target characters corresponding to a preset language type;
and controlling the wearable device to display a target guide view containing the target characters.
Storage medium: any of various types of memory devices or storage devices. The term "storage medium" is intended to include: installation media such as CD-ROM, floppy disk, or tape devices; computer system memory or random access memory such as DRAM, DDR RAM, SRAM, EDO RAM, Rambus RAM, and the like; non-volatile memory such as flash memory or magnetic media (e.g., a hard disk or optical storage); registers or other similar types of memory elements, and so on. The storage medium may also include other types of memory or combinations thereof. In addition, the storage medium may be located in the first computer system in which the program is executed, or in a different, second computer system connected to the first computer system through a network (such as the Internet); the second computer system may provide program instructions to the first computer for execution. The term "storage medium" may include two or more storage media that may reside in different locations, such as in different computer systems connected by a network. The storage medium may store program instructions (e.g., embodied as a computer program) executable by one or more processors.
Of course, in the storage medium containing computer-executable instructions provided in the embodiments of the present application, the computer-executable instructions are not limited to the control operations of the wearable device described above and may also perform related operations in the wearable device control method provided in any embodiment of the present application.
An embodiment of the present application provides a wearable device, in which the wearable device control apparatus provided in the embodiments of the present application can be integrated. Fig. 5 is a schematic structural diagram of a wearable device according to an embodiment of the present application. The wearable device 500 may include: a memory 501, a processor 502, and a computer program stored on the memory 501 and executable by the processor 502, where the processor 502 implements the wearable device control method according to the embodiments of the present application when executing the computer program.
While being worn by the user, the wearable device provided in this embodiment of the present application can determine the object the user pays attention to, acquire information related to that object, and display it in an augmented reality manner through the wearable device, which can enrich the functions of the wearable device and enhance its sense of technology.
The memory and the microprocessor listed in the above examples are only some of the components of the wearable device, which may further include other components. Fig. 6 is a structural block diagram of a wearable device according to an embodiment of the present application, and fig. 7 is a schematic entity diagram of a wearable device according to an embodiment of the present application. As shown in figs. 6 and 7, the wearable device may include: a memory 601, a processor (CPU) 602 (hereinafter referred to as the CPU), a display component 603, a touch panel 604, a heart rate detection module 605, a distance sensor 606, a camera 607, a bone conduction speaker 608, a microphone 609, and a breathing light 610. These components communicate over one or more communication buses or signal lines 611 (hereinafter also referred to as internal transmission lines).
It should be understood that the illustrated wearable device is merely one example, and that the wearable device may have more or fewer components than shown in the figures, may combine two or more components, or may have a different configuration of components. The various components shown in the figures may be implemented in hardware, software, or a combination of hardware and software, including one or more signal processing and/or application specific integrated circuits.
The wearable device provided in this embodiment is described in detail below, taking smart glasses as an example.
The memory 601 is accessible by the CPU 602 and may include high-speed random access memory as well as non-volatile memory, such as one or more magnetic disk storage devices, flash memory devices, or other non-volatile solid-state storage devices.
The display component 603 can display image data and the control interface of the operating system. The display component 603 is embedded in the frame of the smart glasses; an internal transmission line 611 is arranged inside the frame and connected to the display component 603.
The touch panel 604 is arranged on the outer side of at least one temple of the smart glasses to acquire touch data, and is connected to the CPU 602 through an internal transmission line 611. The touch panel 604 can detect the user's finger-sliding and clicking operations and transmit the detected data to the processor 602 for processing, generating corresponding control commands such as a left-shift command, a right-shift command, an up-shift command, and a down-shift command. The display component 603 can display virtual image data transmitted by the processor 602, and this virtual image data can change according to the user operations detected by the touch panel 604. Specifically, screen switching can be performed: when a left-shift or right-shift command is detected, the previous or next virtual image screen is switched to. When the display component 603 displays video playback information, the left-shift command may replay the played content and the right-shift command may fast-forward it. When the display component 603 displays editable text content, the left-shift, right-shift, up-shift, and down-shift commands may be displacement operations on the cursor, i.e., the cursor position can be moved according to the user's touch operation on the touch panel. When the content displayed by the display component 603 is a game picture, the left-shift, right-shift, up-shift, and down-shift commands may control an object in the game; for example, in an airplane game, the flying direction of the airplane can be controlled by these four commands respectively. When the display component 603 can display video pictures of different channels, the left-shift, right-shift, up-shift, and down-shift commands can switch between channels, where the up-shift and down-shift commands may switch to preset channels (such as channels the user commonly uses). When the display component 603 displays a still picture, the left-shift, right-shift, up-shift, and down-shift commands can switch between pictures, where the left-shift command may switch to the previous picture, the right-shift command to the next picture, the up-shift command to the previous picture set, and the down-shift command to the next picture set. The touch panel 604 can also control the display switch of the display component 603; for example, when the touch area of the touch panel 604 is long-pressed, the display component 603 is powered on and displays the image interface, and when it is long-pressed again, the display component 603 is powered off. When the display component 603 is powered on, the brightness or resolution of the displayed image can be adjusted by sliding up and down on the touch panel 604.
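The mode-dependent behavior just described is essentially a lookup table from (current display mode, shift command) to an action. The sketch below transcribes the examples from this paragraph; the mode and action identifiers are illustrative, not names from the patent:

```python
COMMAND_MAP = {
    "image_screens": {"left": "previous_screen",  "right": "next_screen"},
    "video_player":  {"left": "replay",           "right": "fast_forward"},
    "text_editor":   {"left": "cursor_left",      "right": "cursor_right",
                      "up":   "cursor_up",        "down":  "cursor_down"},
    "game":          {"left": "steer_left",       "right": "steer_right",
                      "up":   "steer_up",         "down":  "steer_down"},
    "tv_channels":   {"left": "previous_channel", "right": "next_channel",
                      "up":   "preset_channel",   "down":  "preset_channel"},
    "still_picture": {"left": "previous_picture", "right": "next_picture",
                      "up":   "previous_set",     "down":  "next_set"},
}

def dispatch(display_mode: str, shift_command: str):
    """Map a detected shift command to an action for the current display mode."""
    return COMMAND_MAP.get(display_mode, {}).get(shift_command)
```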
The heart rate detection module 605 measures the user's heart rate data, where the heart rate is the number of heartbeats per minute; the module is arranged on the inner side of a temple. Specifically, the heart rate detection module 605 may obtain human electrocardiographic data with dry electrodes using electric pulse measurement and determine the heart rate from the amplitude peaks in the electrocardiographic data; it may also consist of a light transmitter and a light receiver that measure the heart rate photoelectrically, in which case it is arranged at the bottom of a temple, at the earlobe of the human auricle. After collecting heart rate data, the heart rate detection module 605 sends it to the processor 602 for data processing to obtain the wearer's current heart rate value. In one embodiment, after determining the user's heart rate value, the processor 602 can display it in the display component 603 in real time; optionally, when the heart rate value is determined to be low (for example, below 50) or high (for example, above 100), the processor 602 can trigger an alarm accordingly and simultaneously send the heart rate value and/or the generated alarm information to the server through the communication module.
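As a small illustration of this optional alarm logic, using the example thresholds above (below 50 or above 100 bpm); show_heart_rate, alert, and send_to_server are hypothetical interfaces of the display component and the communication module:

```python
LOW_BPM, HIGH_BPM = 50, 100

def process_heart_rate(bpm: int, display, comm):
    display.show_heart_rate(bpm)            # real-time display of the value
    if bpm < LOW_BPM or bpm > HIGH_BPM:
        message = f"abnormal heart rate: {bpm} bpm"
        display.alert(message)              # trigger the alarm
        comm.send_to_server(bpm, message)   # forward via the communication module
```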
The distance sensor 606 may be arranged on the frame and is used to sense the distance from the face to the frame; it may be implemented using the infrared sensing principle. Specifically, the distance sensor 606 transmits the acquired distance data to the processor 602, which controls the brightness of the display component 603 according to the distance data. Illustratively, the processor 602 controls the display component 603 to be in an on state when it determines that the distance detected by the distance sensor 606 is less than 5 centimeters, and controls it to be in an off state when no approaching object is detected.
The breathing light 610 may be arranged at the edge of the frame; when the display component 603 turns off its display, the breathing light 610 can be lit under the control of the processor 602 to produce a gradually brightening and dimming effect.
The camera 607 may be a front camera module arranged at the upper edge of the frame for collecting image data in front of the user, a rear camera module for collecting the user's eyeball information, or a combination of the two. Specifically, when the camera 607 collects a front image, it sends the collected image to the processor 602 for recognition and processing, and a corresponding trigger event is triggered according to the recognition result. Illustratively, when the user wears the wearable device at home and a furniture item is recognized in the collected front image, the device correspondingly queries whether a corresponding control event exists; if so, the control interface corresponding to the control event is displayed in the display component 603, and the user can control the corresponding furniture item through the touch panel 604, where the furniture item and the smart glasses are connected over a network through Bluetooth or a wireless ad hoc network. When the user wears the wearable device outdoors, a target recognition mode can be started accordingly. The target recognition mode can be used to recognize specific people: the camera 607 sends collected images to the processor 602 for face recognition processing, and if a preset face is recognized, an audio announcement can be made through the speaker integrated in the smart glasses. The target recognition mode can also be used to recognize different plants: for example, the processor 602 records the current image collected by the camera 607 according to a touch operation on the touch panel 604 and sends it to the server through the communication module for recognition; the server recognizes the plants in the collected image and feeds the related plant names back to the smart glasses, and the feedback data is displayed in the display component 603.
The camera 607 may also capture images of the user's eyes, such as the eyeballs, and generate different control instructions by recognizing eyeball rotation: for example, an upward rotation generates an up-shift control instruction, a downward rotation a down-shift control instruction, a leftward rotation a left-shift control instruction, and a rightward rotation a right-shift control instruction. As above, the display component 603 can display virtual image data transmitted by the processor 602, and this data can change according to the control instructions generated from the eye movements detected by the camera 607. Specifically, screen switching can be performed: when a left-shift or right-shift control instruction is detected, the previous or next virtual image screen is switched to. When the display component 603 displays video playback information, the left-shift control instruction may replay the played content and the right-shift control instruction may fast-forward it. When the display component 603 displays editable text content, the left-shift, right-shift, up-shift, and down-shift control instructions may be displacement operations on the cursor, i.e., the cursor position can be moved according to the detected eye movement. When the content displayed by the display component 603 is a game picture, the left-shift, right-shift, up-shift, and down-shift control instructions may control an object in the game; for example, in an airplane game, the flying direction of the airplane can be controlled by these four instructions respectively. When the display component 603 can display video pictures of different channels, the left-shift, right-shift, up-shift, and down-shift control instructions can switch between channels, where the up-shift and down-shift control instructions may switch to preset channels (such as channels the user commonly uses). When the display component 603 displays a still picture, the left-shift, right-shift, up-shift, and down-shift control instructions can switch between pictures, where the left-shift control instruction may switch to the previous picture, the right-shift control instruction to the next picture, the up-shift control instruction to the previous picture set, and the down-shift control instruction to the next picture set.
A bone conduction speaker 608 is provided on the inner wall side of at least one temple and converts audio signals received from the processor 602 into vibration signals. The bone conduction speaker 608 transmits sound to the user's inner ear through the skull: it converts the electrical audio signal into a vibration signal that propagates through the skull to the cochlea, where it is perceived by the auditory nerve. Using the bone conduction speaker 608 as the sound-generating device reduces the thickness and weight of the hardware structure, neither emits electromagnetic radiation nor is affected by it, and offers the advantages of noise immunity, water resistance, and leaving the ears free.
A microphone 609, which may be located on the lower rim of the frame, is used to collect external sounds (from the user or the environment) and transmit them to the processor 602 for processing. Illustratively, the microphone 609 collects the sound made by the user, and the processor 602 performs voiceprint recognition on it; if the voiceprint of an authenticated user is recognized, subsequent voice control is accepted. Specifically, the user may speak, and the microphone 609 sends the collected voice to the processor 602 for recognition so as to generate a corresponding control instruction according to the recognition result, such as "power on", "power off", "display brightness increase", or "display brightness decrease"; the processor 602 then executes the corresponding control process according to the generated instruction.
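A minimal sketch of the voiceprint-gated voice control flow above; voiceprint_matches_owner and transcribe are hypothetical placeholders standing in for the processor's voiceprint and speech recognition, not functions disclosed by this application.

# Recognized phrases mapped to control instructions (examples from the text above).
VOICE_COMMANDS = {
    "power on": "POWER_ON",
    "power off": "POWER_OFF",
    "display brightness increase": "BRIGHTNESS_UP",
    "display brightness decrease": "BRIGHTNESS_DOWN",
}

def voiceprint_matches_owner(audio) -> bool:
    # Placeholder: a real device compares against the enrolled voiceprint.
    return True

def transcribe(audio) -> str:
    # Placeholder speech-to-text result.
    return "power on"

def handle_audio(audio):
    if not voiceprint_matches_owner(audio):
        return None  # voice control is only accepted from the authenticated user
    # Generate the control instruction according to the recognition result.
    return VOICE_COMMANDS.get(transcribe(audio))

print(handle_audio(audio=None))  # -> POWER_ON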
The control device of the wearable device, the storage medium, and the wearable device provided in the above embodiments may execute the control method of the wearable device provided in any embodiment of the present application, and have the corresponding functional modules and beneficial effects for executing that method. For technical details not described in detail in the above embodiments, reference may be made to the control method of the wearable device provided in any embodiment of the present application.
It is to be noted that the foregoing is only a description of the preferred embodiments of the present application and the technical principles employed. Those skilled in the art will understand that the present application is not limited to the particular embodiments described herein, and that various obvious changes, rearrangements, and substitutions can be made without departing from the scope of the application. Therefore, although the present application has been described in some detail through the above embodiments, it is not limited to those embodiments and may include other equivalent embodiments without departing from its concept; the scope of the present application is determined by the scope of the appended claims.

Claims (10)

1. A control method of a wearable device, applied to the wearable device, wherein a guide view of a target building is built into the wearable device, the method comprising:
acquiring the guide view of the target building, wherein the guide view comprises internal layout information of the target building, such as floor information, room layout, and signs;
recognizing characters in the guide view, and converting the characters into target characters corresponding to a preset language type;
controlling the wearable device to display a target guide view containing the target characters;
wherein the acquiring of the guide view of the target building comprises:
acquiring center-of-gravity change information of a user;
determining a corresponding target floor sequence number in the target building according to the center-of-gravity change information;
and acquiring a guide view corresponding to the target floor sequence number.
2. The method of claim 1, wherein recognizing the characters in the guide view and converting the characters into target characters corresponding to the preset language type comprises:
recognizing the characters in the guide view, and determining a language type corresponding to the characters;
and when the language type does not match the preset language type, converting the characters into the target characters corresponding to the preset language type.
3. The method of claim 1, further comprising, after controlling the wearable device to display the target guide view containing the target characters:
receiving a first operation of a user;
determining a target position according to the first operation;
and planning a route from the current position to the target position, and displaying route information on the target guide view.
4. The method of claim 3, wherein the type of operation to which the first operation corresponds comprises gaze, voice, or gesture.
5. The method of claim 1, wherein the acquiring of the guide view of the target building further comprises:
acquiring current positioning information;
determining a target building according to the positioning information;
and acquiring a guide view of the target building.
6. The method of claim 1, further comprising, after controlling the wearable device to display the target guide view containing the target characters:
receiving a second operation of the user;
determining the sequence number of the floors to be switched according to the second operation;
and acquiring a guide view corresponding to the sequence number of the floor to be switched, and repeating the operations of recognizing characters in the guide view.
7. The method of claim 1, wherein the wearable device comprises smart glasses.
8. A control device of a wearable device, wherein the control device is integrated in the wearable device, a guide view of a target building is built into the wearable device, and the control device comprises:
a guide view acquisition module, configured to acquire center-of-gravity change information of a user, determine a corresponding target floor sequence number in the target building according to the center-of-gravity change information, and acquire a guide view corresponding to the target floor sequence number, wherein the guide view comprises internal layout information of the target building, such as floor information, room layout, and signs;
a character conversion module, configured to recognize characters in the guide view and convert the characters into target characters corresponding to a preset language type;
and a guide view display module, configured to control the wearable device to display a target guide view containing the target characters.
9. A computer-readable storage medium on which a computer program is stored, wherein the program, when executed by a processor, implements the control method of a wearable device according to any one of claims 1-7.
10. A wearable device, comprising a memory, a processor, and a computer program stored on the memory and executable on the processor, wherein the processor implements the control method of a wearable device according to any one of claims 1-7 when executing the computer program.
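For orientation only, the following sketch strings together the steps recited in claims 1 and 8: inferring the target floor sequence number from center-of-gravity change information, fetching the built-in guide view for that floor, recognizing its characters, and converting them to the preset language before display. Every function here is a hypothetical placeholder, not an implementation disclosed by this application.

def infer_floor(center_of_gravity_samples) -> int:
    # Placeholder: e.g., count sustained vertical shifts to track floor changes.
    return 2

def get_guide_view(building_id: str, floor: int) -> dict:
    # Built-in guide view for the target floor (toy data).
    return {"building": building_id, "floor": floor, "text": "出口"}

def recognize_characters(guide_view: dict) -> str:
    return guide_view["text"]  # placeholder character recognition

def convert_language(text: str, target_language: str) -> str:
    # Toy dictionary standing in for real language conversion.
    return {"出口": "Exit"}.get(text, text)

def build_target_guide_view(building_id: str, samples, target_language: str) -> dict:
    floor = infer_floor(samples)
    view = get_guide_view(building_id, floor)
    view["text"] = convert_language(recognize_characters(view), target_language)
    return view  # target guide view for the wearable device to display

print(build_target_guide_view("mall-01", samples=[], target_language="en"))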
CN201811000739.7A 2018-08-30 2018-08-30 Wearable device control method and device, storage medium and wearable device Active CN109241900B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201811000739.7A CN109241900B (en) 2018-08-30 2018-08-30 Wearable device control method and device, storage medium and wearable device

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201811000739.7A CN109241900B (en) 2018-08-30 2018-08-30 Wearable device control method and device, storage medium and wearable device

Publications (2)

Publication Number Publication Date
CN109241900A (en) 2019-01-18
CN109241900B (en) 2021-04-09

Family

ID=65069874

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201811000739.7A Active CN109241900B (en) 2018-08-30 2018-08-30 Wearable device control method and device, storage medium and wearable device

Country Status (1)

Country Link
CN (1) CN109241900B (en)

Families Citing this family (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN111125554A (en) * 2019-12-17 2020-05-08 上海秒针网络科技有限公司 Information pushing method and device, storage medium and electronic device
CN111782053B (en) * 2020-08-10 2023-04-28 Oppo广东移动通信有限公司 Model editing method, device, equipment and storage medium
US11670081B2 (en) * 2021-06-03 2023-06-06 At&T Intellectual Property I, L.P. Providing hospitality-related data using an augmented reality display
CN113791689A (en) * 2021-09-15 2021-12-14 云茂互联智能科技(厦门)有限公司 Control method and device of intelligent equipment, storage medium and electronic device

Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN103597476A (en) * 2011-06-13 2014-02-19 索尼公司 Information processing device, information processing method, and computer program
CN105893993A (en) * 2016-06-07 2016-08-24 深圳创龙智新科技有限公司 Intelligent glasses
CN106462574A (en) * 2014-06-24 2017-02-22 谷歌公司 Techniques for machine language translation of text from an image based on non-textual context information from the image
CN106507285A (en) * 2016-11-22 2017-03-15 宁波亿拍客网络科技有限公司 A positioning method based on position base points and specific markers, and a related device method
CN106500690A (en) * 2016-09-22 2017-03-15 中国电子科技集团公司第二十二研究所 An indoor autonomous positioning method and device based on multi-modal fusion
CN108369630A (en) * 2015-05-28 2018-08-03 视觉移动科技有限公司 Gestural control system and method for smart home

Family Cites Families (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2014204330A1 (en) * 2013-06-17 2014-12-24 3Divi Company Methods and systems for determining 6dof location and orientation of head-mounted display and associated user movements
CN104061934B (en) * 2014-06-10 2017-04-26 哈尔滨工业大学 Pedestrian indoor position tracking method based on inertial sensor
CN104517107A (en) * 2014-12-22 2015-04-15 央视国际网络无锡有限公司 Method for translating image words in real time on basis of wearable equipment
US10366165B2 (en) * 2016-04-15 2019-07-30 Tata Consultancy Services Limited Apparatus and method for printing steganography to assist visually impaired
CN106325504A (en) * 2016-08-16 2017-01-11 合肥东上多媒体科技有限公司 Intelligent digital tour-guide system for museum
CN106774836A (en) * 2016-11-23 2017-05-31 上海擎感智能科技有限公司 Intelligent glasses and its control method, control device
CN107832309B (en) * 2017-10-18 2021-10-01 广东小天才科技有限公司 Language translation method and device, wearable device and storage medium

Patent Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN103597476A (en) * 2011-06-13 2014-02-19 索尼公司 Information processing device, information processing method, and computer program
CN106462574A (en) * 2014-06-24 2017-02-22 谷歌公司 Techniques for machine language translation of text from an image based on non-textual context information from the image
CN108369630A (en) * 2015-05-28 2018-08-03 视觉移动科技有限公司 Gestural control system and method for smart home
CN105893993A (en) * 2016-06-07 2016-08-24 深圳创龙智新科技有限公司 Intelligent glasses
CN106500690A (en) * 2016-09-22 2017-03-15 中国电子科技集团公司第二十二研究所 An indoor autonomous positioning method and device based on multi-modal fusion
CN106507285A (en) * 2016-11-22 2017-03-15 宁波亿拍客网络科技有限公司 A positioning method based on position base points and specific markers, and a related device method

Non-Patent Citations (4)

* Cited by examiner, † Cited by third party
Title
Application of BIM Technology in Rail Transit Engineering Design; Ji Cheng; Chinese Journal of Underground Space and Engineering; 2014-05-31; Vol. 10; pp. 1663-1668 *
Development of a wearable system for navigating the visually impaired in the indoor environment - a prototype system for fork detection and navigation -; Mari Sekiguchi et al.; The 23rd IEEE International Symposium on Robot and Human Interactive Communication; 2014-08-29; pp. 549-554 *
Status and Development of Intelligent Command and Control; Jin Xin; Command Information System and Technology; 2017-08-31; Vol. 8, No. 4; pp. 10-18 *
Research and Development of an Electronic Map System for Wearable Computers; Gao Xue; China Masters' Theses Full-text Database, Basic Sciences; 2014-03-15; pp. A008-39 *

Also Published As

Publication number Publication date
CN109241900A (en) 2019-01-18

Similar Documents

Publication Publication Date Title
CN109241900B (en) Wearable device control method and device, storage medium and wearable device
KR102553190B1 (en) Automatic control of wearable display device based on external conditions
US9342610B2 (en) Portals: registered objects as virtualized, personalized displays
US9395811B2 (en) Automatic text scrolling on a display device
CN109254659A (en) Control method, device, storage medium and the wearable device of wearable device
TWI549505B (en) Comprehension and intent-based content for augmented reality displays
CN109145847B (en) Identification method and device, wearable device and storage medium
CN110506249B (en) Information processing apparatus, information processing method, and recording medium
JP2017513093A (en) Remote device control through gaze detection
CN112181152A (en) Advertisement push management method, equipment and application based on MR glasses
CN109224432B (en) Entertainment application control method and device, storage medium and wearable device
CN109040462A (en) Stroke reminding method, apparatus, storage medium and wearable device
CN109259724B (en) Eye monitoring method and device, storage medium and wearable device
CN109068126B (en) Video playing method and device, storage medium and wearable device
CN109255314B (en) Information prompting method and device, intelligent glasses and storage medium
CN109117819B (en) Target object identification method and device, storage medium and wearable device
JP7271909B2 (en) DISPLAY DEVICE AND CONTROL METHOD OF DISPLAY DEVICE
CN109240498B (en) Interaction method and device, wearable device and storage medium
CN109257490A (en) Audio-frequency processing method, device, wearable device and storage medium
CN109145010B (en) Information query method and device, storage medium and wearable device
AU2013200187B9 (en) Automatic text scrolling on a head-mounted display
CN109361727B (en) Information sharing method and device, storage medium and wearable device
CN109144263A (en) Social householder method, device, storage medium and wearable device
CN109101101B (en) Wearable device control method and device, storage medium and wearable device
CN109144265A (en) Display changeover method, device, wearable device and storage medium

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant