CN112164146A - Content control method and device and electronic equipment

Info

Publication number
CN112164146A
Authority
CN
China
Prior art keywords
content
input
user
target
target content
Legal status
Pending
Application number
CN202010924329.2A
Other languages
Chinese (zh)
Inventor
陈开�
Current Assignee
Vivo Mobile Communication Co Ltd
Original Assignee
Vivo Mobile Communication Co Ltd
Application filed by Vivo Mobile Communication Co Ltd
Priority to CN202010924329.2A
Publication of CN112164146A

Classifications

    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06T - IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 19/00 - Manipulating 3D models or images for computer graphics
    • G06T 19/006 - Mixed reality
    • G06F - ELECTRIC DIGITAL DATA PROCESSING
    • G06F 3/00 - Input arrangements for transferring data to be processed into a form capable of being handled by the computer; output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F 3/01 - Input arrangements or combined input and output arrangements for interaction between user and computer
    • G06F 3/017 - Gesture based interaction, e.g. based on a set of recognized hand gestures
    • G06F 3/03 - Arrangements for converting the position or the displacement of a member into a coded form
    • G06F 3/033 - Pointing devices displaced or positioned by the user, e.g. mice, trackballs, pens or joysticks; accessories therefor
    • G06F 3/0346 - Pointing devices with detection of the device orientation or free movement in a 3D space, e.g. 3D mice, 6-DOF [six degrees of freedom] pointers using gyroscopes, accelerometers or tilt-sensors

Landscapes

  • Engineering & Computer Science (AREA)
  • General Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Human Computer Interaction (AREA)
  • Computer Graphics (AREA)
  • Computer Hardware Design (AREA)
  • Software Systems (AREA)
  • User Interface Of Digital Computer (AREA)

Abstract

The application discloses a content control method, a content control device, and an electronic device, belonging to the field of communication technology. The method can solve the technical problem that visually impaired people cannot use AR devices normally. Applied to an AR device that includes a wearing part, the method includes: displaying target content in an augmented reality space; receiving a first input of a user; and, in response to the first input, updating a display position of the target content, or sending the target content to the wearing part and controlling the wearing part to output the target content. The method and device are suitable for scenarios in which visually impaired people use AR devices.

Description

Content control method and device and electronic equipment
Technical Field
The application belongs to the technical field of communication, and particularly relates to a content control method and device and electronic equipment.
Background
Visually impaired people may have low visual acuity and an impaired field of vision, so their vision does not reach normal levels; that is, a visually impaired person may not be able to see clearly what is in front of his or her eyes.
For example, when a visually impaired person uses augmented reality (AR) glasses, he or she may not be able to see the content displayed by the AR glasses clearly. This may prevent visually impaired people from using AR devices properly.
Disclosure of Invention
Embodiments of the present application aim to provide a content control method, a content control device, and an electronic device, which can solve the problem that visually impaired people cannot use AR (augmented reality) devices normally.
In order to solve the technical problem, the present application is implemented as follows:
in a first aspect, an embodiment of the present application provides a content control method, which may be applied to an AR device including a wearing part, and the method includes: displaying target content in an augmented reality space; receiving a first input of a user; updating a display position of the target content in response to the first input; or sending the target content to the wearing piece and controlling the wearing piece to output the target content.
In a second aspect, an embodiment of the present application provides a content control apparatus, including a display module, a receiving module, and a processing module; a display module for displaying target content in an augmented reality space; the receiving module is used for receiving a first input of a user; the processing module is used for responding to the first input and updating the display position of the target content; or the target content is sent to the wearing piece in the AR device, and the wearing piece is controlled to output the target content.
In a third aspect, embodiments of the present application provide an electronic device, which includes a processor, a memory, and a program or instructions stored on the memory and executable on the processor, and when executed by the processor, implement the steps of the method according to the first aspect.
In a fourth aspect, embodiments of the present application provide a readable storage medium on which a program or instructions are stored, which when executed by a processor implement the steps of the method according to the first aspect.
In a fifth aspect, embodiments of the present application provide a chip, where the chip includes a processor and a communication interface, where the communication interface is coupled to the processor, and the processor is configured to execute a program or instructions to implement the method according to the first aspect.
In an embodiment of the present application, the content control apparatus may display target content in an augmented reality space of the AR device; and receiving a first input of a user; and updating a display position of the target content in response to the first input; or sending the target content to a wearing piece in the AR device, and controlling the wearing piece to output the target content. Through the scheme, on one hand, the user can trigger the content control device to move the display position of the target content through the first input, so that the user can trigger the content control device to move the target content to the position which is convenient for the user to see clearly according to the actual use requirement of the user. On the other hand, since the user can trigger the content control device to transmit the target content to the wearing piece in the AR device through the first input and control the wearing piece to output the target content, the user can confirm the target content through the content output by the wearing piece. Therefore, the content control method provided by the embodiment of the application can ensure that the visually impaired people can normally use the AR equipment.
Drawings
FIG. 1 is a schematic diagram of a content control method according to an embodiment of the present application;
FIG. 2 is a first interface schematic diagram of an application of the content control method according to an embodiment of the present application;
FIG. 3 is a second interface schematic diagram of an application of the content control method according to an embodiment of the present application;
FIG. 4 is a third interface schematic diagram of an application of the content control method according to an embodiment of the present application;
FIG. 5 is a schematic diagram of the positional relationship between the target content and the user before the user performs a first input;
FIG. 6 is a fourth interface schematic diagram of an application of the content control method according to an embodiment of the present application;
FIG. 7 is a fifth interface schematic diagram of an application of the content control method according to an embodiment of the present application;
FIG. 8 is a schematic diagram of a content control device according to an embodiment of the present application;
FIG. 9 is a schematic diagram of an electronic device according to an embodiment of the present application;
FIG. 10 is a hardware schematic diagram of an electronic device according to an embodiment of the present application.
Detailed Description
The technical solutions in the embodiments of the present application will be clearly and completely described below with reference to the drawings in the embodiments of the present application, and it is obvious that the described embodiments are some, but not all, embodiments of the present application. All other embodiments, which can be derived by a person skilled in the art from the embodiments given herein without making any creative effort, shall fall within the protection scope of the present application.
The terms "first", "second", and the like in the description and claims of the present application are used to distinguish between similar elements, not necessarily to describe a particular sequential or chronological order. It should be understood that the data so used may be interchanged under appropriate circumstances, so that the embodiments of the application can be practiced in sequences other than those illustrated or described herein. In addition, "and/or" in the specification and claims denotes at least one of the connected objects, and the character "/" generally indicates an "or" relationship between the objects before and after it.
Augmented Reality (AR): the position and angle of the camera image can be calculated in real time and added with the corresponding image, video and three-dimensional model. The aim of augmented reality is to fit a virtual world over the real world and interact with it on a display screen. For example, an AR device such as AR glasses may be provided based on an augmented reality technology, and a camera in the AR device acquires an image of a scene around a user, and displays the acquired image in a display unit (e.g., a display screen) of the AR device after performing three-dimensional modeling processing on the acquired image.
Visual field: the spatial range that can be seen when the user's eyes view an object directly in front with the user's head and eyeballs immobilized.
The content control method provided by the embodiment of the present application is described in detail below with reference to the accompanying drawings through specific embodiments and application scenarios thereof.
As shown in fig. 1, the content control method provided in the embodiment of the present application may include steps 101 to 103 described below.
Step 101, the content control device displays the target content in the augmented reality space.
In this embodiment of the application, the augmented reality space is an augmented reality space corresponding to the AR device.
Optionally, in this embodiment of the present application, the AR device may be a wearable AR device such as AR glasses or an AR helmet, which may be determined according to actual use requirements; the embodiments of the present application are not limited.
The AR device in the embodiments of the present application includes a wearing part, which may be any component that can be worn on, or held in, a user's hand, such as a ring, a glove, or a handle; the wearing part has both an information transmission function and an information output function.
Optionally, in this embodiment of the present application, the target content may be displayed in an AR form.
Optionally, in this embodiment of the present application, the target content may include at least one of the following: characters, pictures, voice, and video images.
Here, "characters" is a general term for text and symbols.
For example, the characters may include: letters, Chinese characters, punctuation marks, operators, special symbols, and the like.
Step 102, the content control device receives a first input from a user.
Optionally, in this embodiment of the application, the first input may be a voice input or a user gesture input, which may be determined specifically according to actual use requirements, and this embodiment of the application is not limited.
Optionally, in this embodiment of the application, the first input may be an input of a user viewing content in the augmented reality space, or may be an input of a user inputting content in an input area in the augmented reality space, or may be an input of a user modifying content in an input area in the augmented reality space, which may be determined according to actual usage requirements, and this embodiment of the application is not limited.
Optionally, in this embodiment of the present application, the input area may be an area for entering an unlock password in an unlock interface, an area for entering a payment password in a payment interface, or an area for entering message content in a conversation interface; it may be determined according to actual use requirements, and the embodiments of the present application are not limited.
Step 103, the content control device responds to the first input and updates the display position of the target content, or sends the target content to the wearing part and controls the wearing part to output the target content.
The content control method provided by the embodiment of the present application is described in detail below through one possible implementation manner and another possible implementation manner.
One possible implementation: the content control means updates the display position of the target content in response to the first input.
Optionally, in this embodiment of the present application, in one possible implementation, assuming that the display position of the target content in the augmented reality space is position A, the content control device may update the display position of the target content in the augmented reality space to position B, where position A is different from position B. For example, if the coordinate information of position A is (x₀, y₀, z₀) and the coordinate information of position B is (x₁, y₁, z₁), then the two sets of coordinates satisfy at least one of: x₀ ≠ x₁, y₀ ≠ y₁, z₀ ≠ z₁.
It should be noted that, in the embodiments of the present application, it is assumed that the user's current position is position C, and the distance between position C and position A is greater than the distance between position C and position B. That is, after the display position of the target content is updated, the target content is closer to the user, which makes it easier for a visually impaired person to see the target content.
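As a small illustrative check of these two conditions (positions represented as (x, y, z) tuples; an assumption for illustration, not the patent's implementation):

```python
import math

def is_valid_position_update(a, b, c):
    """Position B must differ from position A in at least one
    coordinate, and B must be closer to the user's position C."""
    differs = any(pa != pb for pa, pb in zip(a, b))
    closer = math.dist(c, b) < math.dist(c, a)
    return differs and closer

# e.g. moving content from A to B, with the user at C
print(is_valid_position_update((0, 0, 3), (0, 0, 1), (0, 0, 0)))  # True
```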
For example, as shown in (a) of fig. 2, the content "1", displayed in AR form in the augmented reality space, is located at the left edge of the visual field 20 of a visually impaired person, so the person cannot confirm whether the content is "1" or "i". Then, as shown in (b) of fig. 2, in order to see "1" clearly, the visually impaired person may, through the first input, trigger the content control device to move "1" from the left edge of the visual field 20 to the central region of the visual field 20, so that he or she can see "1" clearly. The user's head may remain stationary before and after the user performs the first input.
As another example, (a) of fig. 3 is a schematic diagram of the target content 30 and the user 31 in the augmented reality space. As shown in (a) of fig. 3, the user 31 is visually impaired and cannot see the target content 30 clearly; the user may then, through a first input, trigger the content control device to move the target content 30 closer to the user (e.g., in the direction indicated by the dashed arrow 32), that is, to update the display position of the target content. (b) of fig. 3 is a schematic diagram of the target content 30 and the user 31 after the display position is updated.
In this way, the content control device can update the display position of the target content to a position where the visually impaired can see clearly, so that the visually impaired can see clearly the target content, and the reliability of the visually impaired using the AR device can be improved.
In the embodiment of the application, the user can trigger the content control device to move the display position of the target content through the first input, so that the user can trigger the content control device to move the target content to the position convenient for the user to see clearly according to the actual use requirement of the user, and therefore the visually impaired can normally use the AR equipment.
Another possible implementation: the content control device responds to the first input, sends the target content to the wearing piece, and controls the wearing piece to output the target content.
In this embodiment of the present application, in another possible implementation, after receiving the first input of the user, the content control device may, in response to the first input, send the target content to the wearing part and control the wearing part to output the target content in a target manner.
It can be understood that, in the embodiment of the present application, after the wearing part receives the target content, the target content may be output in a target manner.
Optionally, in the embodiment of the present application, the target mode may be any one of a voice output mode and a tactile output mode.
Optionally, in this embodiment of the application, when the target mode is a voice output mode, the user may confirm the target content by hearing. When the target mode is a tactile output mode, the user can confirm the target content by tactile sensation.
In the embodiment of the application, the user can trigger the content control device through the first input to send the target content to the wearing piece in the AR device and control the wearing piece to output the target content, so that the user can confirm the target content through the content output by the wearing piece, and the visual impaired can be ensured to normally use the AR device.
In the content control method provided in the embodiment of the application, on one hand, a user can trigger the content control device to move the display position of the target content through the first input, so that the user can trigger the content control device to move the target content to a position convenient for the user to see according to the actual use requirement of the user. On the other hand, since the user can trigger the content control device to transmit the target content to the wearing piece in the AR device through the first input and control the wearing piece to output the target content, the user can confirm the target content through the content output by the wearing piece. Therefore, the content control method provided by the embodiment of the application can ensure that the visually impaired people can normally use the AR equipment.
Optionally, in this embodiment of the present application, in the foregoing possible implementation, the content control method provided in this embodiment of the present application may further include the following step 104 before step 102, and step 103 may be specifically implemented by the following step 102a.
Step 104, the content control device determines a first distance between the user and the target content.
In this embodiment of the application, the first distance is a distance between the user and the target content before the user performs the first input.
Specifically, the first distance is the distance between the current position of the display screen of the AR device (or the user's eyes) and the display position of the target content before the user performs the first input. For convenience of description, the current position of the user in the following embodiments refers to the current position of the display screen of the AR device or of the user's eyes.
Optionally, in this embodiment of the present application, the first distance may be preset, for example a fixed value (case 1), or may be determined by the content control device based on the positions of the user and the target content in the augmented reality space (case 2); this may be determined according to actual use requirements, and the embodiments of the present application are not limited.
Specifically, in case 1, when the user moves, the target content follows the user, and the distance between the target content and the user remains unchanged; in this case, the content control device can directly obtain the first distance. In case 2, when the user moves, the target content does not follow the user, that is, the position of the target content in the augmented reality space does not change as the user moves; in this case, the content control device may first obtain the position information of the user in the augmented reality space (hereinafter, user position information) and the position information of the target content in the augmented reality space (hereinafter, initial position information), and then determine the first distance between the user and the target content based on the user position information and the initial position information.
The user position information indicates a current position of the user in the augmented reality space, and the initial position information indicates a current display position of the target content in the augmented reality space, for example, a first position described below.
The method for acquiring the user location information and the initial location information may be determined according to actual use requirements, and the embodiment of the present application is not limited.
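A minimal sketch of the two cases above, under the assumption that positions are (x, y, z) coordinates in the augmented reality space (the patent does not prescribe how the positions are obtained):

```python
import math

def get_first_distance(follows_user, preset_distance, user_pos, content_pos):
    # Case 1: the content follows the user, so the distance is a preset
    # fixed value the device can read directly.
    if follows_user:
        return preset_distance
    # Case 2: the content is anchored in the augmented reality space, so
    # the distance is computed from the user position information and the
    # initial position information.
    return math.dist(user_pos, content_pos)
```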
It should be noted that the foregoing description takes the case where step 104 is executed before step 102 as an example; in actual implementation, step 104 may also be executed after step 102 and before step 103. This may be determined according to actual use requirements, and the embodiments of the present application are not limited.
Step 102a, the content control device responds to the first input and updates the display position of the target content from a first position to a second position.
The second position is determined according to the input parameters of the first input, the distance between the second position and the user is a second distance, and the second distance is smaller than the first distance.
In this embodiment, the first position is a display position of the target content in the augmented reality space before the user performs the first input.
In this embodiment, after receiving the first input of the user, the content control apparatus may obtain an input parameter of the first input in response to the first input, and determine the second position according to the input parameter.
Optionally, in this embodiment of the application, when the first input is a user gesture input, the input parameter of the first input may include at least one of a trajectory position of the first input, a trajectory direction of the first input, and a trajectory size (shape, area) of the first input. When the first input is a voice input, the input parameter of the first input is voice information of the voice input, which may be determined according to actual use requirements, and the embodiment of the present application is not limited.
A method of the content control apparatus determining the second position according to the input parameters of the first input is exemplarily described below.
In the embodiment of the present application, when the first input is a user gesture input, the second position may be determined by the following method 1 to method 3, and when the first input is a voice input, the second position may be determined by the following method 4.
Method 1
Optionally, in this embodiment of the application, in the method 1, the content control device may determine a position in the augmented reality space corresponding to the input parameter of the first input as the second position.
For example, as shown in (a) of fig. 4, the augmented reality space displays a wall painting 40, a chair 41, and a table 42 from the real space, and a soccer ball 43 (i.e., the target content) from the virtual space; the distance between the mural 40 and the user is greater than the distance between the table 42 and the user, and the target content 43 is displayed on the mural 40. If the user wants the soccer ball 43 displayed on the table 42, the user may perform an air click on the table 42 (i.e., the first input, a click gesture made at a distance without touching anything), so that the content control device, in response to the click input, determines the position of the table 42 in the augmented reality space as the second position. In this way, as shown in (b) of fig. 4, the content control device can move the soccer ball 43 from the position of the mural 40 (i.e., the first position) to the position of the table 42 (i.e., the second position), that is, update the display position of the target content from the first position to the second position.
Method 2
Optionally, in this embodiment of the present application, in method 2, the input parameter of the first input may correspond to a first update distance. The content control device determining the second position according to the input parameter of the first input may specifically be: determining, as the second position, the position in the augmented reality space that lies at the first update distance from the first position along the target direction, where the target direction is the direction pointing from the first position to the user.
For example, the user may draw a downward arrow in real space (relative to the display screen of the AR device), i.e., the first input is a gesture input drawing a downward arrow. If the trajectory size of the gesture input corresponds to 2 meters (i.e., the first update distance), the content control device may determine, on the line connecting the first position and the user, the position 2 meters from the first position as the second position.
It should be noted that, in the embodiment of the present application, the first update distance is smaller than the first distance.
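A sketch of method 2 under the same coordinate assumptions (hypothetical helper, not the patent's implementation): the second position lies on the segment from the first position to the user, at the first update distance from the first position.

```python
import math

def second_position_method_2(first_pos, user_pos, update_distance):
    """Move from first_pos towards user_pos by update_distance, which
    must be smaller than the current user-content distance d1."""
    d1 = math.dist(first_pos, user_pos)
    assert 0 < update_distance < d1
    t = update_distance / d1
    # Linear interpolation along the line connecting the two points.
    return tuple(p + t * (u - p) for p, u in zip(first_pos, user_pos))
```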
Method 3
Optionally, in this embodiment of the present application, in method 3, the input parameter of the first input may correspond to a second update distance. The content control device may determine, as the second position, any position in the augmented reality space whose distance from the user is the second update distance.
The second update distance is smaller than the first distance, for example, the second update distance may be k times the first distance, where k is a number greater than 0 and smaller than 1.
In the embodiment of the application, the second position is located between the first position and the user.
In the embodiment of the present application, the second position may specifically be any position on one arc surface (hereinafter referred to as a target arc surface). The target arc surface may be an arc surface in a range between the first position and the user, with the user as a circle center, and with the second update distance as a radius.
For example, as shown in fig. 5, assume that before the first input is received, the display position of the target content 50 is position A (i.e., the first position) with coordinate information (a1, b1, 0), and the user's current position is position C with coordinate information (a2, b2, 0); the distance between position A and position C is d1 = |AC|, and the update distance corresponding to the trajectory shape of the first input is d2 = k × d1 = 0.6 × d1 (k = 0.6). The range between the first position and the user is the range between line p1 and line p2. The second position may be any point on the arc WH shown in fig. 5, where arc WH is an arc centered at position C with radius d2.
Further, since a visually impaired person's peripheral vision may be limited, content displayed on arc WU and arc VH in fig. 5 may not be seen clearly; therefore, in an actual implementation, the second position may be any point on arc UV, that is, any point on the inferior arc UV. The arc UV in fig. 5 is only an illustration and does not limit the content control method provided in the embodiments of the present application; in an actual implementation, the angle corresponding to arc UV may be determined according to actual use requirements.
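A 2D sketch of method 3 in the plane of fig. 5 (illustrative only; the angular limits of arc UV are left as a parameter): any point at distance d2 = k × d1 from the user, within a small angle of the user-to-content direction, can serve as the second position.

```python
import math

def second_position_method_3(first_pos, user_pos, k=0.6, angle_deg=0.0):
    """Return a point at distance d2 = k * d1 from the user, rotated by
    angle_deg from the user->content direction; angle_deg = 0 gives the
    point directly between the user and the first position, and small
    angles keep the point on the inferior arc UV."""
    dx = first_pos[0] - user_pos[0]
    dy = first_pos[1] - user_pos[1]
    d1 = math.hypot(dx, dy)
    d2 = k * d1
    a = math.atan2(dy, dx) + math.radians(angle_deg)
    return (user_pos[0] + d2 * math.cos(a), user_pos[1] + d2 * math.sin(a))
```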
Method 4
Optionally, in this embodiment of the present application, in method 4, the content control apparatus may determine the second position according to voice information input by voice.
For example, as shown in fig. 4, the soccer ball 43 (i.e., the target content) in the virtual space is displayed on the mural 40 in the real space. If the user wants the soccer ball 43 displayed on the table 42 in the real space, the user may say "display the soccer ball on the table" (i.e., the first input), so that the content control device determines from the voice information that the second position is the position of the table, and updates the display position of the soccer ball 43 from the position of the mural 40 to the tabletop of the table 42.
In addition to methods 1 to 4 above, the second position may also be a preset position in the augmented reality space: when the user performs the first input, the content control device may directly update the display position of the target content to the preset position, where the preset position may be a position in the augmented reality space at which a visually impaired person can clearly see displayed content of a certain size.
In the embodiments of the present application, the content control device can update the display position of the target content from the first position to a second position determined according to the input parameter of the first input, so the user can trigger the content control device to move the target content to a position that meets his or her actual needs. Therefore, the content control method provided in the embodiments of the present application not only ensures that visually impaired people can use the AR device normally, but also improves the flexibility of the AR device's content display.
Optionally, in this embodiment of the present application, in one possible implementation manner described above, the content control device may update the display position of at least part of the target content. The step 102a may be specifically realized by the step 102a1 described below.
Step 102a1, the content control device updates the display position of the first content from the first position to the second position in response to the first input.
The first content may be content in the target content determined according to a user input, and the first distance may be the distance between the user and the first content.
Optionally, in this embodiment of the application, the first content may be content determined by the content control device from the target content according to the first input; or the content control device may determine the content from the target content according to a second input, specifically according to an actual usage requirement, which is not limited in this embodiment of the application, and the second input may be an input performed before the user performs the first input.
For example, assume the target content includes an AR keyboard and an AR input box, and the current display position of the AR input box is a first position. Further assume that the first input is a long-press input performed by the user on the AR input box (an air long-press), that this long-press input corresponds to the first update distance, and that the first content determined according to the long-press input is the AR input box. Then, when the user long-presses the AR input box, the content control device may update the display position of the AR input box (i.e., the first content) from the first position to a second position, where the second position lies at the first update distance from the first position in the direction pointing from the first position to the user.
In the embodiment of the application, the user can trigger the content control device to update the display position of part of the target content, so that the flexibility of updating the display position of the content can be improved.
Optionally, in this embodiment of the present application, when the first input is a user gesture input, before step 102a1, the content control method provided in this embodiment of the present application may further include the following steps 106 and 107, and step 102a1 may be specifically implemented by the following step A.
Step 106, the content control device responds to the user gesture input, and determines an input track of the user gesture input.
Optionally, in this embodiment of the present application, the input trajectory of the user gesture input may include at least one of: track direction, track size, and track position (start position and/or end position).
Alternatively, in this embodiment of the application, the content control apparatus may determine the input trajectory of the user gesture input in three ways (i.e., the first way, the second way, and the third way described below).
First mode
In the embodiments of the present application, the AR device can capture images of the real space through its camera, and each region of the image corresponds to a spatial region in the augmented reality space. Therefore, when the user performs a gesture input within the capture range of the AR device's camera, the AR device can capture an image of the user's gesture through the camera and determine the projection trajectory of the gesture in the augmented reality space as the input trajectory of the user's gesture input.
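A sketch of this first mode (the hand detector and the image-to-space mapping are passed in as hypothetical callables; the patent does not name them):

```python
def gesture_trajectory_first_mode(frames, detect_hand, image_to_space):
    """Collect the hand position in each camera frame and project it
    into the augmented reality space; the resulting point sequence is
    the input trajectory of the user's gesture input."""
    trajectory = []
    for frame in frames:
        pixel = detect_hand(frame)  # (u, v) position in the image, or None
        if pixel is not None:
            trajectory.append(image_to_space(pixel))
    return trajectory
```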
Second mode
Optionally, in this embodiment of the present application, in the second mode, the wearing part may include at least one six-axis gyroscope, and the input trajectory may be determined through the following steps 104a to 104c.
Step 104a, the content control device receives first information sent by the wearing part.
The first information is track size information corresponding to user gesture input, and the first information can be acquired by a six-axis gyroscope.
In the embodiment of the present application, the first information may indicate a trajectory size (shape, length (or area)) of the user gesture input in the augmented reality space.
For example, the trajectory shape of the user gesture input in the augmented reality space may be a circle, a check mark, an arrow, or the like. The trajectory length of the user gesture input in the augmented reality space may be 30 centimeters, 20 centimeters, etc.; the trajectory area of the user gesture input in the augmented reality space may be 20 square centimeters, 15 square centimeters, etc. These may be determined according to actual use requirements, and the embodiments of the present application are not limited.
In the embodiments of the present application, a six-axis gyroscope is also referred to as a six-axis motion sensor and includes a three-axis gyroscope and a three-axis accelerometer. The way the wearing part acquires the first information through the six-axis gyroscope may be determined according to actual use requirements, and the embodiments of the present application are not limited.
Step 104b, the content control device acquires second information according to a captured first image.
Step 104c, the content control device determines the input trajectory of the user's gesture input according to the first information and the second information.
The first image includes a gesture image of the user, and the second information is corresponding position information of the gesture image in the augmented reality space, that is, the second information may indicate a position of the user gesture input in the augmented reality space.
In the embodiment of the present application, the first image is a depth image.
In this embodiment of the present application, the content control device can determine from the second information where in the augmented reality space the trajectory (whose size is given by the first information) is located. In this way, the content control device can determine the input trajectory of the user's gesture input according to the first information and the second information.
Optionally, in this embodiment of the application, the first information may further include trajectory direction information corresponding to the user gesture input, where the trajectory direction information may indicate a trajectory direction of the user gesture input in the augmented reality space. For example, the trajectory direction may be a leftward direction relative to a display screen of the AR device, a rightward direction relative to the display screen of the AR device, an upward direction relative to the display screen of the AR device, or a downward direction relative to the display screen of the AR device.
It can be understood that, in the embodiment of the present application, when the first information includes track direction information corresponding to a user gesture input, an input track of the user gesture input includes a track position, a track size, and a track direction.
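A sketch of how the two pieces of information might be combined (an assumption about the data layout, not the patent's algorithm): the first information gives the stroke's shape as offsets from its start, and the second information anchors that stroke in the augmented reality space.

```python
def input_trajectory_second_mode(stroke_offsets, anchor_position):
    """stroke_offsets: (dx, dy, dz) offsets from the six-axis gyroscope
    (first information); anchor_position: where the gesture sits in the
    augmented reality space, from the depth image (second information)."""
    ax, ay, az = anchor_position
    return [(ax + dx, ay + dy, az + dz) for dx, dy, dz in stroke_offsets]
```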
Third mode
In an embodiment of the present application, the wearing member includes at least one six-axis gyroscope therein.
Optionally, in this embodiment of the present application, an indication cursor is displayed in the augmented reality space, and the six-axis gyroscope in the wearing part corresponds to the indication cursor. For example, when the user moves the wearing part to the left, the indication cursor also moves to the left; when the user moves the wearing part to the right, the indication cursor also moves to the right.
In this embodiment of the present application, in the third mode, the input trajectory of the user's gesture input may specifically be the movement trajectory of the indication cursor.
Step 107, the content control device determines the content selected by the input trajectory of the first input as the first content.
Step A, the content control device updates the display position of the first content from the first position to the second position.
In this embodiment of the application, the content selected in the input track of the first input may specifically be content in the target content.
Optionally, in this embodiment of the application, the first content may be content in the target content, which is located within an input trajectory range of the user gesture input, for example, when the user gesture input is input by a user to draw a circle in the augmented reality space, the first content may be content in the target content, which is located within the circle. Or, the first content may be content of the target content, which is displayed at the same position as the trajectory position (start position and/or end position) of the user gesture input, for example, when the user gesture input is a click input in the augmented reality space, the first content may be content of the target content, which is displayed at the same position as the click position. Alternatively, the first content may be content corresponding to a trajectory direction and/or a trajectory shape of the user gesture input in the target content. The method can be determined according to actual use requirements, and the embodiment of the application is not limited.
In the embodiment of the application, because the content selected by the input track input by the user gesture can reflect the content required to be input by the user, the content control device determines the content selected by the input track input by the user gesture as the first content, and the accuracy of the content input by the user can be improved.
Optionally, in this embodiment of the present application, in the foregoing other possible implementation, the wearing part may include N moving components, the target content may include at least one character, and N may be a positive integer. In this case, step 103 may be specifically implemented by the following step 102b.
Step 102b, the content control device responds to the first input, sends the target character to the wearing part, and controls a target moving component among the N moving components to vibrate or slide according to the font shape of the target character, so that the user can confirm the target character by touch.
Wherein the target content may include target characters.
In this embodiment, the target character may be specifically a character determined by the content control device according to the user input, in the at least one character.
For the method for determining the target character by the content control device, reference may be specifically made to the related description of determining the first content in the foregoing embodiment, and details are not repeated here to avoid repetition.
Optionally, in this embodiment of the present application, the N moving components may be (1) components that can slide (specifically, move or roll) within the wearing part, or (2) components that can vibrate within the wearing part.
Optionally, in this embodiment of the present application, in case (1), the content control device may control the target moving component among the N moving components to slide according to the font shape (i.e., the display pattern) of the target character.
Specifically, the content control device may control the target moving part to move/scroll along a target trajectory in the wearing piece, the target trajectory indicating a font of the target character.
For example, in case (1), assuming the target character is "2", as shown in fig. 6, the content control device may control the moving component 61 in the smart glove 60 (i.e., the wearing part) to move/roll along the target trajectory (the trajectory shown by the dashed arrow in fig. 6); that is, the content control device controls the target moving component in the wearing part to slide along the font shape of the target character. In this way, the sliding trajectory of the target moving component traces the shape of "2", so that a visually impaired person can confirm the input character through the sliding trajectory of the target moving component.
Alternatively, in the embodiment of the present application, in the above (2), the positions of the N moving members are fixed, and the N moving members may vibrate up and down or left and right, and the N moving members may work independently. After receiving the first input, the content control device may control a target moving part of the N moving parts to vibrate according to a font shape of the target character, so that the visually impaired may confirm the target character by a tactile sensation.
Optionally, in this embodiment of the present application, in case (2), the N moving components may include N vibrators, which can vibrate locally according to the display pattern (e.g., the font shape) of the content, so that a visually impaired person can perceive the content.
For example, in case (2), as shown in (a) of fig. 7, assume the wearing part is a smart glove 60 containing 49 (N = 49) moving components 62. If the target character is "L", then, as shown in (b) of fig. 7, the content control device may control the 12 moving components arranged in an "L" shape (e.g., the 12 black circles shown in (b) of fig. 7, i.e., the target moving components) to vibrate, ensuring that the shape formed by the 12 vibrating components matches the font shape of the target character "L". In this way, a visually impaired person can perceive the vibration of the 12 moving components and recognize the target character "L".
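A sketch of case (2) with a 7 × 7 grid of moving components (the glyph bitmap below is illustrative, not taken from the patent):

```python
# Illustrative 7x7 glyph; "X" marks a moving component that vibrates.
GLYPH_L = [
    "X......",
    "X......",
    "X......",
    "X......",
    "X......",
    "X......",
    "XXXXXXX",
]

def target_components(glyph):
    """Return (row, col) indices of the components to vibrate so their
    layout matches the character's font shape."""
    return [(r, c)
            for r, row in enumerate(glyph)
            for c, cell in enumerate(row)
            if cell == "X"]

print(target_components(GLYPH_L))  # the components forming the "L"
```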
In the embodiment of the application, the content control device can control the target moving part in the N moving parts to vibrate or slide according to the font of the target character, so that the visually impaired can know the input content by sensing the vibration track or the sliding track of the target moving part, and the visually impaired can accurately input the content.
Optionally, in this embodiment of the application, the target content is a content input by a user in an input region, that is, the target content may specifically be a content displayed in the input region in the augmented reality space. After step 103, the content control method provided in the embodiment of the present application may further include step 108 described below.
In step 108, the content control device executes the target operation on the second content when the first condition is satisfied.
The first condition may be any one of the following: (3) an input by the user confirming the input content is received; (4) an input by the user modifying the target content is received; (5) no user input is received within a preset time period (which may be determined according to actual use requirements; the embodiments of the present application are not limited) after the display position of the target content is updated or the wearing part is controlled to output the target content.
In this embodiment, the second content is a content currently displayed in the input area.
Optionally, in this embodiment of the application, the second content may specifically be the target content or the content modified by the user from the target content.
Optionally, in this embodiment of the present application, after the content control device executes step 103, if the user considers the target content to be the content he or she intends to input, the user may perform an input confirming the input content, or perform no input within the preset time period, to trigger the content control device to confirm the input; alternatively, if the user considers that the target content is not what he or she intends to input (e.g., some content is missing), the user may trigger the content control device to modify the target content.
Optionally, in this embodiment of the present application, to facilitate the user's confirmation of modified content, each time the content in the input area is modified, the content control device may update the display position of the modified content, or control the wearing part to output the modified content in the target manner, so that the user can confirm the modified content.
It can be understood that, when the user modifies the target content, the content control device may perform the target operation on the second content if an input by the user confirming the second content is received, or if no user input is received within the preset time period after the display position of the second content is updated or the wearing part is controlled to output the second content.
It can be understood that, in the embodiment of the present application, the second content is content that meets the actual input requirement of the user.
Optionally, in this embodiment of the present application, the input confirming the input content may be either of the following: a confirmation gesture input by the user (e.g., drawing a check mark "√"), or an input by the user on a preset key of the wearing part (e.g., pressing the preset key).
When the input confirming the input content is an input on a preset key of the wearing part, the wearing part may, upon receiving the input on the preset key, generate a confirmation input instruction and send it to the content control device, so that the content control device confirms the input content after receiving the instruction.
Optionally, in this embodiment of the application, the preset key may be a contact sensor, or may also be any other device that can trigger a confirmation instruction, and may specifically be determined according to an actual use requirement, and this embodiment of the application is not limited.
Optionally, in this embodiment of the application, if the interfaces corresponding to the input areas are different, the target operation performed on the second content by the content control device may also be different.
For example, assume the input area is a password input area in the login interface of an application, that is, the input area corresponds to the application's login interface. If the second content is the same as the preset login password, the content control device displays the application's interface; if the second content differs from the preset login password, it outputs a prompt such as "wrong password, please re-enter".
As another example, assume the input area is an area for entering content in the interface of a session, that is, the input area corresponds to the session's interface. When the first condition is met, the content control device may send the second content through the session.
As another example, assume the input area is an area for entering a password in the lock-screen interface of the AR device. When the first condition is met, the content control device may compare the second content with the preset unlock password; if they are the same, it unlocks the AR device and displays the AR device's desktop; if they differ, it outputs a prompt such as "wrong password, please re-enter".
It should be noted that, in the embodiment of the present application, the foregoing example is only an exemplary illustration of the target operation, and in actual implementation, the target operation may also be any other possible operation, which may be determined according to actual use requirements, and the embodiment of the present application is not limited.
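A toy sketch tying the three examples above together (the interface names and return values here are invented for illustration; the patent defines no such API):

```python
def perform_target_operation(second_content, interface, stored_password=None):
    """Interface-dependent target operation on the confirmed second content."""
    if interface in ("app_login", "lock_screen"):
        if second_content == stored_password:
            return "unlock"  # show the application interface or desktop
        return "prompt: wrong password, please re-enter"
    if interface == "conversation":
        return f"send: {second_content}"  # send the content via the session
    return "no-op"
```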
In the embodiment of the application, the content control device may execute the target operation on the second content after the user confirms that the second content displayed in the input area is the content meeting the actual input requirement of the user, so that the execution result of the target operation can be ensured to meet the actual use requirement of the user.
In the content control method provided in the embodiments of the present application, the executing entity may be a content control device, or a control module in the content control device for executing the content control method. The content control device provided in the embodiments of the present application is described below taking a content control device executing the content control method as an example.
As shown in fig. 8, the present embodiment provides a content control device 70, which may include a display module 71, a receiving module 72, and a processing module 73. A display module 71, which may be used to display target content in an augmented reality space; a receiving module 72, which may be used for receiving a first input of a user; a processing module 73, which may be configured to update a display position of the target content in response to the first input; or the target content is sent to the wearing piece in the AR device, and the wearing piece is controlled to output the target content.
In the content control device provided in the embodiment of the application, on one hand, since the user can trigger the content control device to move the display position of the target content through the first input, the user can trigger the content control device to move the target content to a position convenient for the user to see clearly according to the actual use requirement of the user. On the other hand, since the user can trigger the content control device to transmit the target content to the wearing piece in the AR device through the first input and control the wearing piece to output the target content, the user can confirm the target content through the content output by the wearing piece. Therefore, the content control method provided by the embodiment of the application can ensure that the visually impaired people can normally use the AR equipment.
Optionally, in this embodiment of the application, the content control apparatus may further include a determination module. A determining module, which may be configured to determine a first distance between the user and the target content after the display module 71 displays the target content in the augmented reality space and before the receiving module 72 receives the first input of the user; the processing module 73 may be specifically configured to update the display position of the target content from the first position to the second position; the second position is determined according to the input parameters of the first input, the distance between the second position and the user is a second distance, and the second distance is smaller than the first distance.
In the content control apparatus provided in the embodiments of the present application, because the content control apparatus can update the display position of the target content from the first position to a second position determined according to the input parameter of the first input, the user can trigger the content control apparatus to move the target content to a position that meets his or her actual needs. Therefore, the content control method provided in the embodiments of the present application not only ensures that visually impaired people can use the AR device normally, but also improves the flexibility of the AR device's content display.
Optionally, in this embodiment of the application, the processing module 73 may be specifically configured to update the display position of the first content from the first position to the second position, where the first content is a content determined according to the user input in the target content; the first distance may be a distance between the user and the first content before the display position of the first content is updated.
In the content control apparatus provided in the embodiment of the present application, since the user can trigger the content control apparatus to update the display position of a part of the content in the target content, the flexibility of updating the display position of the content can be improved.
Optionally, in this embodiment of the application, the first input is a user gesture input. The determining module may be further configured to determine an input trajectory of the user gesture input before the processing module 73 updates the display position of the target content, and to determine the content selected by the input trajectory as the first content.
In the content control device provided by the embodiment of the application, because the content selected by the input trajectory of the user gesture input can reflect the content the user intends to operate on, determining that content as the first content can improve the accuracy of the user's content selection.
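A minimal sketch of this selection step follows, assuming each displayed content item carries an axis-aligned bounding box in the display plane and the input trajectory is a sequence of sample points; this data model is an illustrative assumption, not part of the embodiment:

```python
def point_in_box(point, box):
    """box = (xmin, ymin, xmax, ymax) in the display plane."""
    x, y = point
    xmin, ymin, xmax, ymax = box
    return xmin <= x <= xmax and ymin <= y <= ymax

def select_first_content(trajectory, items):
    """Return the items crossed by the input trajectory of the user
    gesture input; these are treated as the first content."""
    return [label for label, box in items
            if any(point_in_box(p, box) for p in trajectory)]

# Example: the gesture sweeps only across the second line of text.
items = [("line 1", (0, 0, 100, 20)), ("line 2", (0, 30, 100, 50))]
trajectory = [(10, 35), (50, 40), (90, 45)]
print(select_first_content(trajectory, items))  # ['line 2']
```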
Optionally, in this embodiment of the application, the wearing piece may include N moving parts, the target content may include at least one character, and N may be a positive integer. The processing module 73 may be specifically configured to control a target moving part of the N moving parts to vibrate or slide according to the font of a target character, so that the user confirms the target character through touch; the target content includes the target character.
In the content control device provided by the embodiment of the application, because the content control device can control a target moving part of the N moving parts to vibrate or slide according to the font of the target character, a visually impaired user can recognize the input content by sensing the vibration trajectory or the sliding trajectory of the target moving part, so that visually impaired users can input content accurately.
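To make the tactile output concrete, the following hypothetical sketch maps a character's font to a sequence of actions on the moving parts. The stroke table, the MovingPart class, and the action names are all illustrative assumptions, since the embodiment leaves open exactly how a font is rendered as a vibration or sliding trajectory:

```python
# Hypothetical stroke table: each character maps to (moving-part index,
# action) pairs loosely derived from the character's font/stroke shape.
STROKES = {
    "T": [(0, "slide_horizontal"), (1, "slide_vertical")],
    "L": [(0, "slide_vertical"), (1, "slide_horizontal")],
}

class MovingPart:
    """Stand-in for one of the N moving parts in the wearing piece."""
    def __init__(self, index):
        self.index = index

    def actuate(self, action):
        # A real wearing piece would vibrate or slide here so the user
        # can feel the stroke; this sketch just logs the action.
        print(f"moving part {self.index}: {action}")

def output_character(char, parts):
    """Drive the target moving parts so the user can confirm the
    target character through touch (sketch only)."""
    for part_index, action in STROKES.get(char, []):
        parts[part_index].actuate(action)

parts = [MovingPart(i) for i in range(4)]  # N = 4 moving parts
output_character("T", parts)
```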
The content control device in the embodiment of the present application may be an electronic device, or may be a component, an integrated circuit, or a chip in the electronic device. The electronic device may be a mobile electronic device or a non-mobile electronic device. For example, the mobile electronic device may be a mobile phone, a tablet computer, a notebook computer, a palmtop computer, a vehicle-mounted electronic device, a wearable device, an ultra-mobile personal computer (UMPC), a netbook, or a Personal Digital Assistant (PDA), and the non-mobile electronic device may be an AR device, a Personal Computer (PC), a Television (TV), a teller machine, a self-service machine, or the like; the embodiments of the present application are not specifically limited thereto.
The content control device in the embodiment of the present application may be a device having an operating system. The operating system may be an Android operating system, an iOS operating system, or another possible operating system; the embodiments of the present application are not specifically limited thereto.
The content control device provided in the embodiment of the present application can implement each process implemented by the content control device in the method embodiments of fig. 1 to fig. 7, and is not described here again to avoid repetition.
As shown in fig. 9, an embodiment of the present application further provides an electronic device 200, which includes a processor 202, a memory 201, and a program or an instruction stored in the memory 201 and executable on the processor 202. The program or the instruction, when executed by the processor 202, implements the processes of the foregoing content control method embodiment and can achieve the same technical effects; details are not repeated here to avoid repetition.
It should be noted that the electronic devices in the embodiments of the present application include the mobile electronic devices and the non-mobile electronic devices described above.
Fig. 10 is a schematic diagram of a hardware structure of an electronic device implementing an embodiment of the present application.
The electronic device 1000 includes, but is not limited to: a radio frequency unit 1001, a network module 1002, an audio output unit 1003, an input unit 1004, a sensor 1005, a display unit 1006, a user input unit 1007, an interface unit 1008, a memory 1009, and a processor 1010.
Those skilled in the art will appreciate that the electronic device 1000 may further include a power source (e.g., a battery) for supplying power to the various components, and the power source may be logically connected to the processor 1010 through a power management system, so as to implement functions such as managing charging, discharging, and power consumption through the power management system. The electronic device structure shown in fig. 10 does not constitute a limitation of the electronic device; the electronic device may include more or fewer components than those shown, combine some components, or arrange components differently, and the description is not repeated here.
The display unit 1006 may be configured to display the target content in the augmented reality space; the user input unit 1007 may be used to receive a first input from a user; and the processor 1010 may be configured to, in response to the first input, update a display position of the target content, or send the target content to the wearing piece in the AR device and control the wearing piece to output the target content.
In the electronic device provided by the embodiment of the application, on one hand, since the user can trigger the electronic device through the first input to move the display position of the target content, the user can, according to an actual use requirement, move the target content to a position where it can be seen clearly. On the other hand, since the user can trigger the electronic device through the first input to send the target content to the wearing piece in the AR device and control the wearing piece to output the target content, the user can confirm the target content through the content output by the wearing piece. Therefore, the content control method provided by the embodiment of the application can ensure that visually impaired users can use the AR device normally.
Optionally, in this embodiment of the application, the processor 1010 may be configured to determine a first distance between the user and the target content after the display unit 1006 displays the target content in the augmented reality space and before the user input unit 1007 receives a first input of the user. The processor 1010 may be specifically configured to update the display position of the target content from a first position to a second position, where the second position is determined according to input parameters of the first input, the distance between the second position and the user is a second distance, and the second distance is smaller than the first distance.
In the electronic device provided by the embodiment of the application, because the electronic device can update the display position of the target content from the first position to the second position determined according to the input parameters of the first input, the user can, according to an actual use requirement, trigger the electronic device to move the target content to a position that meets that requirement. Therefore, the content control method provided by the embodiment of the application can not only ensure that visually impaired users use the AR device normally, but also improve the flexibility of displaying content on the AR device.
Optionally, in this embodiment of the application, the processor 1010 may be specifically configured to update the display position of first content from the first position to the second position, where the first content is content in the target content that is determined according to user input; the first distance may be the distance between the user and the first content before the display position of the first content is updated.
In the electronic device provided by the embodiment of the application, the user can trigger the electronic device to update the display position of part of the content in the target content, so that the flexibility of updating the display position of the content can be improved.
Optionally, in this embodiment of the application, the first input is a user gesture input. The processor 1010 may be further configured to determine an input trajectory of the user gesture input before updating the display position of the target content, and to determine the content selected by the input trajectory as the first content.
In the electronic device provided by the embodiment of the application, because the content selected by the input trajectory of the user gesture input can reflect the content the user intends to operate on, determining that content as the first content can improve the accuracy of the user's content selection.
Optionally, in this embodiment of the application, the wearing piece may include N moving parts, the target content may include at least one character, and N may be a positive integer. The processor 1010 may be specifically configured to control a target moving part of the N moving parts to vibrate or slide according to the font of a target character, so that the user confirms the target character through touch; the target content includes the target character.
In the electronic device provided by the embodiment of the application, because the electronic device can control a target moving part of the N moving parts to vibrate or slide according to the font of the target character, a visually impaired user can recognize the input content by sensing the vibration trajectory or the sliding trajectory of the target moving part, so that visually impaired users can input content accurately.
It should be understood that, in the embodiment of the present application, the input unit 1004 may include a Graphics Processing Unit (GPU) 10041 and a microphone 10042, and the Graphics Processing Unit 10041 processes image data of still pictures or videos obtained by an image capturing device (such as a camera) in a video capturing mode or an image capturing mode. The display unit 1006 may include a display panel 10061, and the display panel 10061 may be configured in the form of a liquid crystal display, an organic light-emitting diode display, or the like. The user input unit 1007 includes a touch panel 10071 and other input devices 10072. The touch panel 10071, also referred to as a touch screen, may include two parts: a touch detection device and a touch controller. Other input devices 10072 may include, but are not limited to, a physical keyboard, function keys (e.g., volume control keys and switch keys), a trackball, a mouse, and a joystick, which are not described in detail here. The memory 1009 may be used to store software programs as well as various data, including but not limited to application programs and an operating system. The processor 1010 may integrate an application processor, which mainly handles the operating system, user interfaces, and applications, and a modem processor, which mainly handles wireless communication. It will be appreciated that the modem processor may not be integrated into the processor 1010.
The embodiment of the present application further provides a readable storage medium, where a program or an instruction is stored on the readable storage medium, and when the program or the instruction is executed by a processor, the program or the instruction implements each process of the foregoing content control method embodiment, and can achieve the same technical effect, and in order to avoid repetition, details are not repeated here.
The processor is a processor in the electronic device in the above embodiment. The readable storage medium includes a computer readable storage medium, such as a Read-Only Memory (ROM), a Random Access Memory (RAM), a magnetic disk or an optical disk.
The embodiment of the present application further provides a chip, where the chip includes a processor and a communication interface, the communication interface is coupled to the processor, and the processor is configured to execute a program or an instruction to implement each process of the foregoing content control method embodiment, and the same technical effect can be achieved.
It should be understood that the chips mentioned in the embodiments of the present application may also be referred to as system-on-chip, system-on-chip or system-on-chip, etc.
It should be noted that, in this document, the terms "comprises," "comprising," or any other variation thereof are intended to cover a non-exclusive inclusion, such that a process, method, article, or apparatus that comprises a list of elements does not include only those elements but may include other elements not expressly listed or inherent to such process, method, article, or apparatus. Without further limitation, an element defined by the phrase "comprising a …" does not exclude the presence of other like elements in a process, method, article, or apparatus that comprises the element. Further, it should be noted that the scope of the methods and apparatus of the embodiments of the present application is not limited to performing the functions in the order illustrated or discussed, but may include performing the functions in a substantially simultaneous manner or in a reverse order based on the functions involved; for example, the methods described may be performed in an order different from that described, and various steps may be added, omitted, or combined. In addition, features described with reference to certain examples may be combined in other examples.
Through the above description of the embodiments, those skilled in the art will clearly understand that the method of the above embodiments can be implemented by software plus a necessary general hardware platform, and certainly can also be implemented by hardware, but in many cases, the former is a better implementation manner. Based on such understanding, the technical solutions of the present application may be embodied in the form of a software product, which is stored in a storage medium (such as ROM/RAM, magnetic disk, optical disk) and includes instructions for enabling a terminal (such as a mobile phone, a computer, a server, an air conditioner, or a network device) to execute the method according to the embodiments of the present application.
While the present embodiments have been described with reference to the accompanying drawings, it is to be understood that the invention is not limited to the precise embodiments described above, which are meant to be illustrative and not restrictive, and that various changes may be made therein by those skilled in the art without departing from the spirit and scope of the invention as defined by the appended claims.

Claims (12)

1. A content control method applied to an AR device including a wearing part, the method comprising:
displaying target content in an augmented reality space;
receiving a first input of a user;
updating a display position of the target content in response to the first input; or sending the target content to the wearing piece and controlling the wearing piece to output the target content.
2. The method of claim 1, wherein after the displaying the target content in the augmented reality space, prior to the receiving the first input by the user, the method further comprises:
determining a first distance between a user and the target content;
the updating the display position of the target content comprises:
updating a display position of the target content from a first position to a second position;
wherein the second position is determined according to the input parameters of the first input, the distance between the second position and the user is a second distance, and the second distance is smaller than the first distance.
3. The method of claim 2, wherein updating the display position of the target content from a first position to a second position comprises:
updating the display position of first content from the first position to the second position, wherein the first content is the content determined according to user input in the target content;
the first distance is specifically a distance between a user and the first content before the display position of the first content is updated.
4. The method of claim 3, wherein the first input is a user gesture input;
before the updating the display position of the first content from the first position to the second position, the method further comprises:
determining an input trajectory of the user gesture input;
and determining the content selected by the input trajectory as the first content.
5. The method of claim 1, wherein the wearing piece includes N moving parts, the target content includes at least one character, N is a positive integer;
the controlling the wearing piece to output the target content comprises:
controlling a target moving part of the N moving parts to vibrate or slide according to the font of a target character so as to enable a user to confirm the target character through touch;
wherein the target content comprises the target character.
6. A content control device is characterized by comprising a display module, a receiving module and a processing module;
the display module is used for displaying target content in an augmented reality space;
the receiving module is used for receiving a first input of a user;
the processing module is used for responding to the first input and updating the display position of the target content; or the target content is sent to a wearing piece in the AR device, and the wearing piece is controlled to output the target content.
7. The apparatus of claim 6, further comprising a determining module;
the determining module is configured to determine a first distance between a user and target content after the display module displays the target content in the augmented reality space and before the receiving module receives a first input of the user;
the processing module is specifically configured to update the display position of the target content from a first position to a second position;
wherein the second position is determined according to the input parameters of the first input, the distance between the second position and the user is a second distance, and the second distance is smaller than the first distance.
8. The apparatus according to claim 7, wherein the processing module is specifically configured to update a display position of a first content from the first position to the second position, the first content being a content determined according to a user input in the target content;
the first distance is specifically a distance between a user and the first content before the display position of the first content is updated.
9. The apparatus of claim 8, wherein the first input is a user gesture input;
the determining module is further configured to determine an input trajectory of the user gesture input before the processing module updates the display position of the target content; and determine the content selected by the input trajectory as the first content.
10. The apparatus of claim 6, wherein the wearing piece includes N moving parts, the target content includes at least one character, and N is a positive integer;
the processing module is specifically configured to control a target moving part of the N moving parts to vibrate or slide according to the font of a target character, so that a user can confirm the target character through touch;
wherein the target content comprises the target character.
11. An electronic device comprising a processor, a memory, and a program or instructions stored on the memory and executable on the processor, the program or instructions when executed by the processor implementing the steps of the content control method according to any one of claims 1 to 5.
12. A readable storage medium, characterized in that it stores thereon a program or instructions which, when executed by a processor, implement the steps of the content control method according to any one of claims 1 to 5.
CN202010924329.2A 2020-09-04 2020-09-04 Content control method and device and electronic equipment Pending CN112164146A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202010924329.2A CN112164146A (en) 2020-09-04 2020-09-04 Content control method and device and electronic equipment

Publications (1)

Publication Number Publication Date
CN112164146A (en) 2021-01-01

Family

ID=73858427

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202010924329.2A Pending CN112164146A (en) 2020-09-04 2020-09-04 Content control method and device and electronic equipment

Country Status (1)

Country Link
CN (1) CN112164146A (en)

Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20070273610A1 (en) * 2006-05-26 2007-11-29 Itt Manufacturing Enterprises, Inc. System and method to display maintenance and operational instructions of an apparatus using augmented reality
US20150040074A1 (en) * 2011-08-18 2015-02-05 Layar B.V. Methods and systems for enabling creation of augmented reality content
JP2015037242A (en) * 2013-08-13 2015-02-23 ソニー株式会社 Reception device, reception method, transmission device, and transmission method
JP2016096513A (en) * 2014-11-17 2016-05-26 株式会社ゼンリンデータコム Information processing system, information processing method, and program
CN111190488A (en) * 2019-12-30 2020-05-22 华为技术有限公司 Device control method, communication apparatus, and storage medium
CN111596845A (en) * 2020-04-30 2020-08-28 维沃移动通信有限公司 Display control method and device and electronic equipment

Similar Documents

Publication Publication Date Title
US11112856B2 (en) Transition between virtual and augmented reality
CN102779000B (en) User interaction system and method
CN108469899B (en) Method of identifying an aiming point or area in a viewing space of a wearable display device
US8823697B2 (en) Tabletop, mobile augmented reality system for personalization and cooperation, and interaction method using augmented reality
US20110227913A1 (en) Method and Apparatus for Controlling a Camera View into a Three Dimensional Computer-Generated Virtual Environment
CN110456907A (en) Control method, device, terminal device and the storage medium of virtual screen
Lee et al. Towards augmented reality driven human-city interaction: Current research on mobile headsets and future challenges
US20090153468A1 (en) Virtual Interface System
CN108885521A (en) Cross-environment is shared
CN107533374A (en) Switching at runtime and the merging on head, gesture and touch input in virtual reality
CN103246351A (en) User interaction system and method
KR102147430B1 (en) virtual multi-touch interaction apparatus and method
US20120229509A1 (en) System and method for user interaction
CN113209601B (en) Interface display method and device, electronic equipment and storage medium
EP3549127A1 (en) A system for importing user interface devices into virtual/augmented reality
WO2022253041A1 (en) Image display method and electronic device
JP6088787B2 (en) Program, information processing apparatus, information processing method, and information processing system
Nor’a et al. Fingertips interaction method in handheld augmented reality for 3d manipulation
Lee et al. Tunnelslice: Freehand subspace acquisition using an egocentric tunnel for wearable augmented reality
Zhang et al. A hybrid 2D–3D tangible interface combining a smartphone and controller for virtual reality
CN113051538B (en) Information unlocking method and electronic equipment
CN110717993A (en) Interaction method, system and medium of split type AR glasses system
CN108803862B (en) Account relation establishing method and device used in virtual reality scene
CN112698723B (en) Payment method and device and wearable equipment
CN112164146A (en) Content control method and device and electronic equipment

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination