WO2023093329A1 - Information output method, head-mounted display device and readable storage medium - Google Patents
Information output method, head-mounted display device and readable storage medium
- Publication number
- WO2023093329A1 (PCT/CN2022/124410)
- Authority
- WO
- WIPO (PCT)
- Prior art keywords
- mode
- information
- information component
- entity object
- target entity
- Prior art date
Classifications
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F3/00—Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
- G06F3/01—Input arrangements or combined input and output arrangements for interaction between user and computer
- G06F3/011—Arrangements for interaction with the human body, e.g. for user immersion in virtual reality
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F3/00—Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
- G06F3/01—Input arrangements or combined input and output arrangements for interaction between user and computer
- G06F3/011—Arrangements for interaction with the human body, e.g. for user immersion in virtual reality
- G06F3/013—Eye tracking input arrangements
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F3/00—Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
- G06F3/01—Input arrangements or combined input and output arrangements for interaction between user and computer
- G06F3/017—Gesture based interaction, e.g. based on a set of recognized hand gestures
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F2203/00—Indexing scheme relating to G06F3/00 - G06F3/048
- G06F2203/01—Indexing scheme relating to G06F3/01
- G06F2203/012—Walk-in-place systems for allowing a user to walk in a virtual environment while constraining him to a given position in the physical environment
Definitions
- the present application relates to the technical field of wearable devices, in particular to an information output method, a head-mounted display device and a readable storage medium.
- head-mounted display devices are capable of recommending information on objects within the field of view of the head-mounted display device.
- existing information recommendation methods often output the relevant information of all objects within the field of view of the head-mounted display device at one time, flooding the user's field of view with a large amount of information. It can be seen that the existing information recommendation methods are not flexible enough.
- Embodiments of the present application provide an information output method, a head-mounted display device, and a readable storage medium.
- the first aspect of the embodiments of the present application provides an information output method applicable to a head-mounted display device, the method including: identifying a target entity object, and acquiring a first information component corresponding to the target entity object; displaying the first information component in a first mode; and
- in response to an interactive operation, controlling the first information component to switch from the first mode to a second mode for display according to the interactive operation; wherein the information carrying capacity of the second mode is greater than the information carrying capacity of the first mode.
- the second aspect of the embodiment of the present application provides a head-mounted display device, including:
- An acquisition unit configured to identify a target entity object, and acquire a first information component corresponding to the target entity object
- a display unit for displaying said first information component in a first mode
- a control unit configured to, in response to an interactive operation, control the first information component to switch from the first mode to a second mode for display according to the interactive operation; wherein the information carrying capacity of the second mode is greater than the information carrying capacity of the first mode.
- the third aspect of the embodiment of the present application provides a head-mounted display device, including:
- a memory storing executable program code, and a processor, wherein the processor invokes the executable program code stored in the memory, and when the executable program code is executed by the processor, the processor implements the method described in the first aspect of the embodiments of the present application.
- the fourth aspect of the embodiment of the present application provides a computer-readable storage medium, on which executable program code is stored.
- when the executable program code is executed by a processor, the method described in the first aspect of the embodiments of the present application is implemented.
- the fifth aspect of the embodiment of the present application discloses a computer program product.
- when the computer program product runs on a computer, the computer is caused to execute any one of the methods disclosed in the first aspect of the embodiments of the present application.
- the sixth aspect of the embodiment of the present application discloses an application distribution platform.
- the application distribution platform is used to distribute computer program products, wherein, when a computer program product runs on a computer, the computer is caused to execute any one of the methods disclosed in the first aspect of the embodiments of the present application.
- FIG. 1 is a schematic diagram of a scene of an information output method disclosed in an embodiment of the present application
- FIG. 2 is a schematic flow diagram of an information output method disclosed in an embodiment of the present application.
- FIG. 3A is a schematic flowchart of another information output method disclosed in the embodiment of the present application.
- Fig. 3B is a schematic diagram of an interface of the display area disclosed in the embodiment of the present application.
- Fig. 3C is another schematic diagram of the interface of the display area disclosed in the embodiment of the present application.
- Fig. 3D is another schematic diagram of the interface of the display area disclosed in the embodiment of the present application.
- Fig. 3E is another schematic diagram of the interface of the display area disclosed in the embodiment of the present application.
- Fig. 3F is another schematic diagram of the interface of the display area disclosed in the embodiment of the present application.
- Fig. 3G is another schematic diagram of the interface of the display area disclosed in the embodiment of the present application.
- FIG. 4 is a structural block diagram of a head-mounted display device disclosed in an embodiment of the present application.
- Fig. 5 is a structural block diagram of another head-mounted display device disclosed in an embodiment of the present application.
- Embodiments of the present application provide an information output method, a head-mounted display device, and a readable storage medium, which can improve the flexibility of information output.
- the head-mounted display device (Head Mounted Display, HMD) is used to send optical signals to the user's eyes, and may be a VR (virtual reality), AR (augmented reality), or MR (mixed reality) device.
- FIG. 1 is a schematic diagram of a scene of an information output method disclosed in an embodiment of the application.
- the scene diagram shown in FIG. 1 may include a head-mounted display device 10 and a user 20 .
- the head-mounted display device 10 identifies the target entity object within its field of view, and obtains the first information component corresponding to the target entity object, and then displays the first information component in the first mode.
- in response to an interactive operation by the user 20, the head-mounted display device 10 controls the first information component to switch from a first mode with a smaller information carrying capacity to a second mode with a larger information carrying capacity according to the interactive operation.
- in this way, the relevant information about the target entity object is presented to the user step by step, in display modes with increasing amounts of information, based on the user's interactions; this avoids flooding the user's field of view with a large amount of information at once and improves the flexibility of information output.
- FIG. 2 is a schematic flowchart of an information output method disclosed in an embodiment of the present application.
- the information output method as shown in Figure 2 may include the following steps:
- the target entity object is an entity object that is within the field of view of the head-mounted display device and for which relevant information is pre-stored.
- the size of the field of view of the head-mounted display device is related to the viewing angle of the head-mounted display device: the larger the viewing angle, the larger the field of view.
- the viewing angle of the head-mounted display device is the included angle between the edge of the head-mounted display device and the line of sight from the user's eyes, and includes a horizontal viewing angle and a vertical viewing angle.
- the target entity object may be a person or object in real space (such as tables and chairs, plants, animals, vehicles, sculptures, buildings, posters, street signs and text, etc.).
- identifying the target entity object may include: collecting an image frame of the entity object within the field of view of the head-mounted display device through an image sensor of the head-mounted display device, and identifying the target entity object according to the image frame.
- the image sensor may include but is not limited to at least one of the following: a monocular camera, a multi-lens camera, an ultrasonic radar, a lidar, and the like.
- the image frame may be a two-dimensional image and/or a three-dimensional image, which is not limited in this embodiment of the present application.
- for example, if the target entity object is a person, the image frame of the physical object may be a face image of the person.
- if the target entity object is a sculpture, the image frame may be a two-dimensional image of the side of the sculpture facing the user, or a three-dimensional image of the sculpture.
- the relevant information of the target entity object may be carried by one or more information components, and the first information component may be any one or more of them.
- the first information component in the first mode is displayed in the display area of the head-mounted display device.
- the first mode represents a display style of the first information component
- the first information component in the first mode uses the display style corresponding to the first mode to display the information carried by the first information component.
- the first information component may include multiple modes, and the display styles of the related information corresponding to the first information component differ between the modes.
- displaying the first information component in the first mode in the display area of the head-mounted display device includes but not limited to the following methods:
- Method 1: display the first information component in the first mode at a designated position in the display area of the head-mounted display device;
- Method 2: obtain relative pose information of the target entity object and the head-mounted display device; determine, according to the relative pose information, the mapping position of the target entity object on the display area of the head-mounted display device; determine the display position of the first information component according to the mapping position and preset superimposed display information of the first information component; and display the first information component in the first mode at that display position.
- the superimposed display information may indicate a relative positional relationship between the display position and the mapping position.
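Method 2 above can be sketched in a few lines; the function name and coordinate convention here are illustrative assumptions, not taken from the patent:

```python
# Illustrative sketch of Method 2: the display position of an information
# component is the mapping position of the entity object on the display area
# plus a preset offset taken from the component's superimposed display
# information (the relative positional relationship between the two positions).

def display_position(mapping_pos, superimposed_offset):
    """Return the on-screen position for the first information component.

    mapping_pos: (x, y) pixel position where the entity object maps onto
                 the display area of the head-mounted display device.
    superimposed_offset: (dx, dy) relative positional relationship between
                 the display position and the mapping position.
    """
    return (mapping_pos[0] + superimposed_offset[0],
            mapping_pos[1] + superimposed_offset[1])

# Example: show the component 40 px above the object's mapped position.
pos = display_position((320, 240), (0, -40))
```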
- in this way, the user's field of view also includes the first information component of the target entity object displayed in the first mode.
- in response to the interactive operation, control the first information component to switch from the first mode to a second mode for display according to the interactive operation; wherein the information carrying capacity of the second mode is greater than the information carrying capacity of the first mode.
- the interactive operation may include but is not limited to being triggered in at least one of the following ways: triggered by operating the control device corresponding to the head-mounted display device, triggered by detecting user gestures, triggered by detecting user voice, and triggered by detecting the duration of the user's gaze.
- the first information component in the second mode displays the relevant information of the target entity object carried by the first information component in the display style corresponding to the second mode.
- the target entity object can be identified and its first information component displayed in the first mode, which has a smaller information carrying capacity; when the user triggers an interactive operation, the first information component is switched from the first mode to the second mode, which has a larger information carrying capacity, for display.
- this step-by-step, progressive information display method effectively reduces the amount of information in the user's field of view, thereby improving the flexibility of information output.
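The step-by-step switching described above can be sketched as a tiny state machine. The mode names and class name below are illustrative assumptions drawn from the embodiments later in the text, not a prescribed implementation:

```python
# Each successive mode carries more information; an interactive operation
# advances the component one mode at a time instead of showing everything at
# once, which keeps the user's field of view uncluttered.

MODES = ["guide", "display", "interactive"]  # increasing information carrying capacity

class InformationComponent:
    def __init__(self):
        self.mode = MODES[0]  # start in the first mode (smallest capacity)

    def on_interaction(self):
        """Switch to the next mode with a larger information carrying capacity."""
        i = MODES.index(self.mode)
        if i < len(MODES) - 1:
            self.mode = MODES[i + 1]

comp = InformationComponent()
comp.on_interaction()  # guide -> display
comp.on_interaction()  # display -> interactive
```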
- FIG. 3A is a schematic flowchart of another information output method disclosed in the embodiment of the present application.
- the information output method as shown in Figure 3A may include the following steps:
- identifying the target entity object and acquiring the first information component corresponding to the target entity object may include: identifying the user scene and acquiring scene information matching the user scene, the scene information including object information corresponding to multiple entity objects and at least one information component corresponding to each piece of object information; acquiring a second entity object that matches at least one entity object in the scene information, and determining the second entity object to be the target entity object; and acquiring the first information component corresponding to the target entity object according to the object information corresponding to the second entity object.
- the second entity object is an entity object within the field of view of the head-mounted display device
- matching the second entity object with the entity object in the scene information may include: matching the second entity object with the object information of the entity object included in the scene information.
- identifying the user scene may include: identifying the user scene according to data collected by sensors.
- the sensor may include an image sensor and/or an attitude sensor; the image sensor is used to collect image data of the second entity object in the field of view, and the attitude sensor is used to collect the user's attitude data, which may include but is not limited to movement speed, user posture, and the like.
- the object information corresponding to any physical object may include the image data of the physical object
- the object information corresponding to the second physical object may include the image data of the second physical object.
- matching the object information corresponding to the second entity object with the object information corresponding to any entity object includes: matching the image data of the second entity object with the image data of that entity object.
- identifying the user scene according to the data collected by the sensor may include but not limited to the following methods:
- Mode 1: if the sensor includes an image sensor, identify the user scene according to the image data collected by the image sensor.
- Mode 2: if the sensor includes an image sensor, identify the user scene according to the image data collected by the image sensor and the positioning data collected by the positioning module.
- Mode 3: if the sensor includes an image sensor and an attitude sensor, identify the user scene according to the image data collected by the image sensor when the attitude data collected by the attitude sensor indicates that the user is in a static state.
- Mode 4: if the sensor includes an image sensor and an attitude sensor, identify the user scene according to the image data collected by the image sensor and the positioning data collected by the positioning module when the attitude data collected by the attitude sensor indicates that the user is in a static state.
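Modes 1 to 4 above differ only in which data sources feed scene identification. A hedged sketch (function and parameter names are assumptions):

```python
# Which data is used to identify the user scene depends on the sensors
# available and, when an attitude sensor is present, on whether the attitude
# data indicates the user is in a static state.

def pick_scene_inputs(has_image_sensor, has_attitude_sensor, has_positioning,
                      user_is_static=True):
    """Return the data sources that would feed scene identification, or None
    if the scene should not be identified yet (no camera, or user moving)."""
    if not has_image_sensor:
        return None  # all four modes require an image sensor
    if has_attitude_sensor and not user_is_static:
        return None  # Modes 3/4: wait until the user is in a static state
    inputs = ["image"]
    if has_positioning:
        inputs.append("positioning")  # Modes 2/4 also use positioning data
    return inputs
```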
- acquiring the scene information matching the user scene includes: displaying at least one information identifier matching the user scene; and acquiring the scene information corresponding to the selected information identifier in response to a selection operation on an information identifier.
- before displaying at least one information identifier matching the user scene, the at least one information identifier may first be acquired. Further, acquiring at least one information identifier matching the user scene may include: acquiring at least one information identifier matching the user scene according to a pre-built user portrait.
- the user portrait may include but not limited to the user's basic characteristics, social characteristics and preference characteristics.
- the basic characteristics may include at least one of the following: gender, age, and education
- the social characteristics may include at least one of the following: family, social, and occupation
- the preference characteristics may include at least one of the following: hobbies, brand preferences, and product preferences.
- the user scene is a shopping mall
- the user portrait indicates that the user is female, aged 25, unmarried, and fond of food
- at least one information identifier matching the user scene and acquired according to the user portrait may be food information about the mall.
- in another example, at least one information identifier matching the user scene and acquired according to the user portrait may be clothing information about the mall.
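The portrait-based matching above can be sketched as an intersection between the scene's available categories and the preference characteristics in the pre-built user portrait. The data model below is a hypothetical illustration, not the patent's representation:

```python
# Information identifiers are matched to the user scene by keeping only the
# scene categories that appear among the preference characteristics of the
# pre-built user portrait.

def match_identifiers(scene_identifiers, user_portrait):
    """scene_identifiers: {category: identifier text} for the current scene.
    user_portrait: dict with basic/social/preference characteristics."""
    preferences = set(user_portrait.get("preferences", []))
    return [ident for category, ident in scene_identifiers.items()
            if category in preferences]

# The shopping-mall example from the text: a user fond of food gets the
# food-related identifier rather than everything the mall offers.
portrait = {"gender": "female", "age": 25, "preferences": ["food"]}
mall = {"food": "Food in this mall", "clothing": "Clothing in this mall"}
matched = match_identifiers(mall, portrait)  # -> ["Food in this mall"]
```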
- the information identification may include at least one of the following: text, icons, letters, and the like.
- the selection operation on the information identifier may include but is not limited to at least one of the following triggering methods: triggered by operating the control device corresponding to the head-mounted display device, triggered by detecting user gestures, triggered by detecting user voice, and triggered by detecting the duration of the user's gaze.
- acquiring the scene information corresponding to the selected information identifier may include: in response to the selection operation on the information identifier, sending the selected information identifier to a server, so that the server searches for the scene information corresponding to the selected information identifier and sends the found scene information to the head-mounted display device.
- At least one information identifier matched with the user scene acquired according to the user portrait is related to clothing information in the mall
- one of the information identifiers, referred to as the target identifier, indicates a floor of the clothing area. For example, if the clothing area of the shopping mall has 3 floors, the above at least one information identifier includes 3 information identifiers respectively representing the first, second, and third floors of the clothing area, and the target identifier can represent any one of them.
- the scene information corresponding to the target identifier may include clothing information of each clothing store in the layer.
- obtaining the first information component corresponding to the target entity object according to the object information corresponding to the second entity object may include: acquiring, from the scene information, at least one information component corresponding to the target entity object according to the object information corresponding to the second entity object; and determining the first information component according to the priority level of each information component in the at least one information component corresponding to the target entity object.
- the priority level is used to represent the output order of the information components, and the information component with a higher priority level is output first.
- determining the first information component according to the priority level of each information component in at least one information component corresponding to the target entity object may include but not limited to the following methods:
- Method 1: among the at least one information component corresponding to the target entity object, use the information component with the highest priority level as the first information component;
- Method 2: among the at least one information component corresponding to the target entity object, use the information components with the highest and second-highest priority levels as the first information component.
- at least one information component corresponding to the target entity object may include an information component used to carry the brand information of the A brand clothing store, an information component used to carry the clothing information of the A brand clothing store, an information component used to carry the sales performance of the A brand clothing store, and an information component used to carry the shopping-guide information of the A brand clothing store.
- the information component carrying the clothing information of the brand A clothing store has the highest priority level
- the information component carrying the information of the shopping guide of the brand A clothing store has the second highest priority level
- the priority level of the information component carrying the sales performance information of the brand A clothing store is the lowest
- the priority level of the information component carrying the brand information of the brand A clothing store is the second lowest
- the first information component may be the information component carrying the clothing information of the A brand clothing store.
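Methods 1 and 2 above reduce to a sort over priority levels. The representation below (name/priority tuples, larger number = output first) is an illustrative assumption:

```python
# Information components carry a priority level representing their output
# order; the component(s) with the highest level(s) become the first
# information component.

def pick_first_components(components, count=1):
    """components: list of (name, priority) pairs, larger priority output
    first. Returns the `count` highest-priority component names."""
    ranked = sorted(components, key=lambda c: c[1], reverse=True)
    return [name for name, _ in ranked[:count]]

# The A brand clothing store example from the text:
components = [
    ("brand info", 2),        # second lowest
    ("clothing info", 4),     # highest
    ("sales performance", 1), # lowest
    ("shopping guide", 3),    # second highest
]
first = pick_first_components(components)         # Method 1
first_two = pick_first_components(components, 2)  # Method 2
```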
- displaying the first information component in the first mode may include but not limited to the following ways:
- Mode 1: when there are multiple target entity objects, display the first information components in the first mode respectively corresponding to a specified number of the target entity objects.
- displaying the first information components in the first mode respectively corresponding to the specified number of target entity objects may include but is not limited to the following methods:
- (1) if the number of target entity objects is greater than the specified number, determine third target entity objects from the multiple target entity objects according to the actual distance between each target entity object and the head-mounted display device, the number of third target entity objects being the specified number; and display the first information components respectively corresponding to the third target entity objects in the first mode.
- the target entity objects include the shop sign of the brand A clothing store, the shop sign of the brand B clothing store, and the shop sign of the brand C clothing store; the specified number is 2; the brand A clothing store is closest to the head-mounted display device, the brand B clothing store is next, and the brand C clothing store is the farthest.
- the first information components displayed in the first mode correspond to the shop sign of the brand A clothing store and the shop sign of the brand B clothing store.
- if the target entity objects include the shop signs of the brand A, brand B, and brand C clothing stores, and the specified number is 3, the first information components displayed in the first mode correspond to the shop signs of all three clothing stores.
- Method 2: when there are multiple target entity objects, obtain the distance value between each target entity object and the head-mounted display device; determine a first entity object from the multiple target entity objects according to the distance values; and display the first information component in the first mode corresponding to the first entity object.
- the first entity object may be one or more of multiple target entity objects.
- the first entity object may include a target entity object closest to the head-mounted display device.
- the first entity object may include the target entity objects closest and second closest to the head-mounted display device.
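The distance-based selection in both methods above is a sort by distance followed by truncation. A minimal sketch using the shop-sign example from the text (names and distances are hypothetical):

```python
# When there are more target entity objects than the specified number, keep
# only the ones closest to the head-mounted display device.

def closest_targets(targets, specified_number):
    """targets: list of (name, distance_in_meters) pairs. Returns up to
    `specified_number` target names, nearest first."""
    return [name for name, _ in
            sorted(targets, key=lambda t: t[1])[:specified_number]]

shop_signs = [("brand A", 5.0), ("brand B", 8.0), ("brand C", 12.0)]
closest_targets(shop_signs, 2)  # A and B are shown; C is dropped
closest_targets(shop_signs, 3)  # all three fit within the specified number
```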
- if no interactive operation is triggered within a first specified duration, the display of the first information component is terminated.
- the first specified duration may be obtained through a large number of experiments.
- the first specified duration may be 8s (seconds), 9s, 10s or 12s.
- in response to the interactive operation, control the first information component to switch from the first mode to the second mode for display according to the interactive operation; wherein the information carrying capacity of the second mode is greater than the information carrying capacity of the first mode.
- the first information component may include multiple information components.
- the selected information component may be used as the target information component in response to the selection operation of the information component.
- controlling the first information component to switch from the first mode to the second mode for display according to the interactive operation may include: in response to the interactive operation, controlling the target information component to switch from the first mode to the second mode according to the interactive operation The second mode is displayed.
- the target information component may include one or more of the foregoing multiple information components, which is not limited in this embodiment of the present application.
- in one embodiment, the first mode includes a guide mode and the second mode includes a display mode
- in another embodiment, the first mode includes a display mode and the second mode includes an interactive mode
- the display style of the first information component in the guide mode includes a guide icon, which is used to prompt the user that relevant information about the target entity object can be obtained
- the display style of the first information component in the display mode includes a card style; the card style is used to display at least one of the following elements: picture, text, icon, list, and grid, and presents the relevant information of the target entity object to the user in card form
- the display style of the first information component in the interactive mode includes a card style and an application jump control; the card style is used to display the relevant information of the target entity object to the user, and the application jump control is used to jump to the application program corresponding to the first information component.
- the specification of the card style can be defined in advance by the software developer.
- the specifications of the card style may include 2×2, 4×2, 4×4, 2×4, and so on.
- the 2×2 card style is used to display at most two of the following elements: text, picture, and icon.
- the 4×2 card style is used to display at most three of the following elements: text, picture, icon, and button.
- the 4×4 card style is used to display at most four of the following elements: text, picture, icon, button, list, and grid.
- the 2×4 card style is used to display at most three of the following elements: text, picture, icon, button, and grid.
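The per-specification limits above could be encoded by a software developer as a simple lookup table. The encoding below is an assumption about one reasonable way to do so, not the patent's definition:

```python
# Each predefined card specification constrains how many elements a card may
# show and which element types are allowed.

CARD_SPECS = {
    "2x2": {"max_elements": 2, "allowed": {"text", "picture", "icon"}},
    "4x2": {"max_elements": 3, "allowed": {"text", "picture", "icon", "button"}},
    "4x4": {"max_elements": 4,
            "allowed": {"text", "picture", "icon", "button", "list", "grid"}},
    "2x4": {"max_elements": 3,
            "allowed": {"text", "picture", "icon", "button", "grid"}},
}

def card_fits(spec, elements):
    """Check whether a list of elements fits the given card specification."""
    s = CARD_SPECS[spec]
    return len(elements) <= s["max_elements"] and set(elements) <= s["allowed"]
```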
- the first information component is an information component carrying clothing information of a brand A clothing store
- the first mode is a guide mode
- the second mode is a display mode
- the guide icon can be a clothes icon
- the card style is used to display clothes store discounts and new arrivals.
- FIG. 3B is a schematic diagram of an interface when the display area displays the first information component in the guide mode.
- the schematic diagram of the interface shown in FIG. 3B may include an icon 30 .
- FIG. 3C is a schematic diagram of the interface when the display area displays the first information component in the display mode.
- the schematic diagram of the interface shown in FIG. 3C may include a card 40 and a card 50, wherein the card 40 is used to display text describing clothing discounts and the card 50 is used to display images of new clothing.
- the first information component is an information component that carries clothing information of a brand A clothing store
- the first mode is a display mode
- the second mode is an interactive mode
- the application jump control may include a control used to jump to the online store page of brand A in the M application program, and a control used to enter the evaluation page about brand A in the N application program.
- the M application program is a transaction-based e-commerce platform, such as "Jingdong”, “Taobao”, etc.
- the N application program can be a content-based e-commerce platform such as "Xiaohongshu”.
- FIG. 3D is a schematic diagram of the interface when the display area displays the first information component in the interactive mode.
- the schematic diagram of the interface shown in FIG. 3D may include a card 40, a card 50, an application jump control 60, and an application jump control 70; for the card 40 and the card 50, refer to the description of FIG. 3C, which will not be repeated here.
- the text on the application jump control 60 is "Enter the online store", which is used to jump to brand A's online store page in the M application; the text on the application jump control 70 is "Dianping", which is used to jump to the review page about brand A in the N application.
- the interactive operation may include an interactive operation triggered by the user operating a control device corresponding to the head-mounted display device; and/or an interactive operation triggered by detecting a user gesture.
- when the first mode includes the guide mode, the interactive operation includes an interactive operation that can be triggered by detecting the duration of the user's gaze.
- the switch from the guide mode to the display mode relies on an interactive operation triggered by the user's gaze, while the switch from the display mode to the interactive mode relies on an interactive operation actively triggered by the user. This mode-switching approach of the first information component better fits user habits and helps increase user stickiness.
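As a rough illustration of the gaze-duration trigger described above, the sketch below tracks how long the user's gaze has rested on a target before reporting a trigger. The 2-second threshold, class name, and frame-by-frame polling model are illustrative assumptions, not details taken from the embodiments.

```python
import time

# Hypothetical dwell threshold; the embodiments do not specify a value.
GAZE_DWELL_THRESHOLD = 2.0  # seconds of continuous gaze that count as a trigger


class GazeTrigger:
    """Reports when the user's gaze has rested on a target long enough."""

    def __init__(self, threshold=GAZE_DWELL_THRESHOLD, clock=time.monotonic):
        self.threshold = threshold
        self.clock = clock
        self._gaze_start = None  # timestamp when the current dwell began

    def update(self, gazing_at_target):
        """Call once per frame; returns True once the dwell time reaches
        the threshold. Looking away resets the dwell timer."""
        if not gazing_at_target:
            self._gaze_start = None
            return False
        if self._gaze_start is None:
            self._gaze_start = self.clock()
            return False
        return (self.clock() - self._gaze_start) >= self.threshold
```

In this sketch the guide-to-display switch would fire on the frame `update` first returns True, while controller or gesture events would be handled by a separate, explicitly user-initiated path.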
- the first information component is switched from the second mode to the first mode.
- the interface in the display area is switched from FIG. 3C to FIG. 3B.
- the second specified duration may be less than the first specified duration.
- the second specified duration may be 3s, 4s or 5s.
- if the target entity object is within the field of view of the head-mounted display device again and the first information component is switched from the first mode to the second mode again, the second information component in the second mode is displayed.
- the priority level of the second information component is lower than that of the first information component.
- the second information component is the information component with the second highest priority level.
- the first information component is an information component carrying clothing information of a brand A clothing store
- the second information component may be an information component carrying information of a shopping guide of a brand A clothing store.
- the second information component is an information component that carries information about a shopping guide of a brand A clothing store
- the second information component includes a guide mode and a display mode.
- FIG. 3E is a schematic interface diagram when the display area displays the second information component in the guide mode
- the schematic interface diagram shown in FIG. 3E includes an icon 80
- FIG. 3F is a schematic diagram of the interface when the display area displays the second information component in the display mode.
- the schematic diagram of the interface shown in FIG. 3F includes a card 90 and a card 100, wherein both the card 90 and the card 100 include a photo of the shopping guide, a title, and the content they are responsible for.
- if the target entity object is recognized again and the first information component is switched from the first mode to the second mode again, the second information component in the second mode is displayed in one of, but not limited to, the following ways:
- Method 1: if the target entity object is recognized again, when the first information component is switched from the first mode to the second mode again, stop displaying the first information component in the second mode in the display area, and display the second information component in the second mode.
- Method 2: if the target entity object is recognized again, when the first information component is switched from the first mode to the second mode again, keep displaying the first information component in the second mode in the display area, and display the second information component in the second mode.
- FIG. 3G includes the first information component 110 in the second mode and the second information component 120 in the second mode.
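The two display strategies above (Method 1 replaces the first information component's card with the next one, Method 2 keeps both, as in FIG. 3G) can be sketched as a single choice; all function and argument names here are assumptions made for illustration.

```python
def components_to_show(displayed, next_component, keep_previous):
    """Return the components shown in the display area after the target
    entity object is recognized again and the display switches back to
    the second mode.

    displayed      -- components currently shown in the second mode
    next_component -- the second information component (next priority level)
    keep_previous  -- False for Method 1 (replace), True for Method 2 (keep)
    """
    if keep_previous:
        # Method 2: both components visible at once (e.g. FIG. 3G)
        return list(displayed) + [next_component]
    # Method 1: only the newly polled component remains (e.g. FIG. 3F)
    return [next_component]
```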
- after step 304, the following step may also be performed: if the target entity object is recognized again, display the second information component in the first mode in the display area, wherein the first information component and the second information component are different.
- after step 304, the following step may also be performed: if the target entity object is recognized again, display the second information component in the second mode in the display area, wherein the first information component and the second information component are different.
- identifying the target entity object again includes: if the target entity object is detected within the field of view of the head-mounted display device within a third specified duration after the target entity object disappears from that field of view, determining that the target entity object is within the field of view again.
- when the user pays attention to the target entity object again, the above method can be used to poll and recommend other information components corresponding to the target entity object, enabling intelligent updating of the output information.
- the target entity object can be identified, the first information component of the target entity object can be displayed in the first mode with a small information carrying capacity, and when the user triggers an interactive operation, the first information component can be switched from the first mode to the second mode with a larger information carrying capacity for display.
- further, when the user's interest in the target entity object decreases, the first information component can be switched from the second mode back to the first mode to intelligently reduce the amount of information in the user's field of view; and when the user's interest in the target entity object increases again, the other information components of the target entity object are polled, that is, the output information about the target entity object is intelligently updated, which further improves the flexibility of information output.
- FIG. 4 is a structural block diagram of a head-mounted display device disclosed in an embodiment of the present application; it may include an acquisition unit 401, a display unit 402, and a control unit 403, wherein:
- An acquisition unit 401 configured to identify a target entity object, and acquire a first information component corresponding to the target entity object;
- a display unit 402 configured to display the first information component in the first mode
- the control unit 403 is configured to, in response to the interactive operation, control the first information component to switch from the first mode to the second mode for display according to the interactive operation; wherein, the information carrying capacity of the second mode is greater than that of the first mode.
- the manner in which the display unit 402 is used to display the first information component in the first mode may specifically include: the display unit 402 being used to, when there are multiple target entity objects, display the first information components in the first mode respectively corresponding to a specified number of target entity objects.
- the manner in which the display unit 402 displays the first information component in the first mode may specifically include: the display unit 402 being used to, when there are multiple target entity objects, acquire the distance value between each target entity object and the user; determine the first entity object from the multiple target entity objects according to the distance values; and display the first information component in the first mode corresponding to the first entity object.
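A minimal sketch of the distance-based selection just described, under the assumption that the first entity object is simply the nearest one (or the nearest few) of the recognized targets:

```python
def select_first_entity_objects(targets, distances, keep=1):
    """Pick the `keep` nearest target entity objects to the user.

    targets   -- identifiers of the recognized target entity objects
    distances -- distance value from each target to the user, same order
    """
    # Pair each distance with its target and sort nearest-first.
    ranked = sorted(zip(distances, targets))
    return [name for _, name in ranked[:keep]]
```

With `keep=1` this matches the "closest target entity object" variant; `keep=2` matches the "closest and second closest" variant.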
- the display unit 402 is further configured to stop displaying the first information component if no interaction operation is detected within a first specified time period after displaying the first information component in the first mode.
- the control unit 403 is further configured to, after controlling the first information component to switch from the first mode to the second mode for display in response to the interactive operation: if the target entity object is not identified within the second specified duration, switch the first information component from the second mode to the first mode; and/or, if the target entity object is recognized again, display the second information component in the second mode when the first information component is switched from the first mode to the second mode again, wherein the first information component and the second information component are different.
- the first information component includes a plurality of information components
- the control unit 403 is further configured to respond to the selection operation of the information component, and use the selected information component as the target information component;
- the manner in which the control unit 403 is configured to, in response to an interactive operation, control the first information component to switch from the first mode to the second mode for display according to the interactive operation may specifically include: the control unit 403 being configured to, in response to the interactive operation, control the target information component to switch from the first mode to the second mode for display according to the interactive operation.
- the manner in which the obtaining unit 401 is used to identify the target entity object and obtain the first information component corresponding to the target entity object may specifically include: the obtaining unit 401 being used to identify a user scene and acquire scene information matching the user scene, the scene information including object information corresponding to multiple entity objects and at least one information component corresponding to each piece of object information; obtain a second entity object matching at least one entity object in the scene information, and determine that the second entity object is the target entity object; and acquire the first information component corresponding to the target entity object according to the object information corresponding to the second entity object.
- the manner in which the acquiring unit 401 acquires scene information matching the user scene may specifically include: the acquiring unit 401 being configured to display at least one information identifier matching the user scene, and in response to a selection operation on an information identifier, acquire the scene information corresponding to the selected information identifier.
- the manner in which the obtaining unit 401 is used to obtain the first information component corresponding to the target entity object according to the object information corresponding to the second entity object may specifically include: the obtaining unit 401 is used to obtain the first information component corresponding to the second entity object according to information, obtaining at least one information component corresponding to the target entity object from the scene information; and determining the first information component according to the priority level of each information component in the at least one information component corresponding to the target entity object.
- the first mode includes a guide mode, and the second mode is a presentation mode; and/or, the first mode includes a presentation mode, and the second mode includes an interactive mode;
- the display style of the first information component in the guide mode includes a guide icon, which is used to prompt the user to obtain the relevant information of the target entity object;
- the display style of the first information component in the display mode includes a card style, and the card style is used to display at least one of the following elements: picture, text, icon, list, and grid, so as to present the relevant information of the target entity object to the user in a card style;
- the display style of the first information component in the interactive mode includes a card style and an application jump control, which are used to present the relevant information of the target entity object to the user in a card style, and the application jump control is used to jump into the application program corresponding to the first information component.
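The mode-to-style relationship above can be pictured as a small lookup table. The element names follow the description, but the table structure and the numeric "load" values are assumptions, used only to express that the information carrying capacity grows from guide mode through display mode to interactive mode.

```python
from enum import Enum


class Mode(Enum):
    GUIDE = "guide"              # smallest information load: a guide icon
    DISPLAY = "display"          # card with picture/text/icon/list/grid
    INTERACTIVE = "interactive"  # cards plus application jump controls


# Illustrative style table, not a structure defined by the embodiments.
DISPLAY_STYLE = {
    Mode.GUIDE: {"elements": ["guide_icon"], "load": 1},
    Mode.DISPLAY: {"elements": ["card"], "load": 2},
    Mode.INTERACTIVE: {"elements": ["card", "app_jump_control"], "load": 3},
}


def information_load(mode):
    """A crude proxy for the 'information carrying capacity' of a mode."""
    return DISPLAY_STYLE[mode]["load"]
```

A mode switch is then only valid in the direction of increasing load when triggered by the user, and in the direction of decreasing load when a timeout fires.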
- the first mode includes a display mode
- the interactive operation includes an interactive operation triggered by a user operating a control device corresponding to the head-mounted display device; and/or, an interactive operation triggered by detecting a user gesture;
- the first mode includes a guide mode
- the interactive operation includes an interactive operation triggered by detecting a gaze duration of the user.
- FIG. 5 is a structural block diagram of a head-mounted display device disclosed in an embodiment of the present application.
- the head-mounted display device may include a processor 501 and a memory 502 coupled to the processor 501 , wherein the memory 502 may store one or more computer programs.
- Processor 501 may include one or more processing cores.
- the processor 501 connects various parts of the entire terminal device using various interfaces and lines, and executes various functions of the terminal device and processes data by running or executing instructions, programs, code sets, or instruction sets stored in the memory 502 and by calling data stored in the memory 502.
- the processor 501 may be implemented in at least one hardware form of Digital Signal Processing (DSP), Field-Programmable Gate Array (FPGA), and Programmable Logic Array (PLA).
- the processor 501 may integrate one of, or a combination of, a central processing unit (CPU), a graphics processing unit (GPU), and a modem.
- the CPU mainly handles the operating system, user interface, application programs, and the like; the GPU is used to render and draw display content; and the modem is used to handle wireless communication. It can be understood that the modem may also not be integrated into the processor 501 and may instead be implemented by a separate communication chip.
- the memory 502 may include random access memory (Random Access Memory, RAM), and may also include read-only memory (Read-Only Memory, ROM). Memory 502 may be used to store instructions, programs, codes, sets of codes, or sets of instructions.
- the memory 502 may include a program storage area and a data storage area, wherein the program storage area may store instructions for implementing an operating system, instructions for implementing at least one function (such as a touch function, a sound playback function, or an image playback function), instructions for implementing the foregoing method embodiments, and the like.
- the data storage area may also store data created by the terminal device during use, and the like.
- the processor 501 also has the following functions:
- the first information component is controlled to switch from the first mode to the second mode for display according to the interactive operation; wherein, the information carrying capacity of the second mode is greater than the information carrying capacity of the first mode.
- the processor 501 also has the following functions:
- the first information components in the first mode respectively corresponding to the specified number of target entity objects are displayed.
- the processor 501 also has the following functions:
- when there are multiple target entity objects, obtain the distance value between each target entity object and the user; determine the first entity object from the multiple target entity objects according to the distance values; and display the first information component in the first mode corresponding to the first entity object.
- the processor 501 also has the following functions:
- the processor 501 also has the following functions:
- if the target entity object is not identified within the second specified duration, the first information component is switched from the second mode to the first mode; and/or, if the target entity object is identified again, when the first information component is switched from the first mode to the second mode again, the second information component in the second mode is displayed, wherein the first information component and the second information component are different.
- the first information component may include multiple information components, and the processor 501 also has the following functions:
- the target information component is controlled to switch from the first mode to the second mode for display according to the interactive operation.
- the processor 501 also has the following functions:
- a user scene is identified and scene information matching the user scene is acquired, the scene information including object information corresponding to multiple entity objects and at least one information component corresponding to each piece of object information; a second entity object matching at least one entity object in the scene information is obtained, and the second entity object is determined to be the target entity object; and the first information component corresponding to the target entity object is acquired according to the object information corresponding to the second entity object.
- the processor 501 also has the following functions:
- the scene information corresponding to the selected information identifier is acquired.
- the processor 501 also has the following functions:
- the first mode includes a guide mode, and the second mode is a presentation mode; and/or, the first mode includes a presentation mode, and the second mode includes an interactive mode;
- the display style of the first information component in the guide mode includes a guide icon, which is used to prompt the user to obtain the relevant information of the target entity object;
- the display style of the first information component in the display mode includes a card style, and the card style is used to display at least one of the following elements: picture, text, icon, list, and grid, so as to present the relevant information of the target entity object to the user in a card style;
- the display style of the first information component in the interactive mode includes a card style and an application jump control, which are used to present the relevant information of the target entity object to the user in a card style, and the application jump control is used to jump into the application program corresponding to the first information component.
- the first mode includes a presentation mode
- the interactive operation includes an interactive operation triggered by the user operating a control device corresponding to the head-mounted display device; and/or an interactive operation triggered by detecting a user gesture
- the first mode includes a guide mode
- the interactive operation includes an interactive operation triggered by detecting a gaze duration of the user.
- the embodiment of the present application discloses a computer-readable storage medium, which stores a computer program, wherein when the computer program is executed by a processor, the methods described in the foregoing embodiments are implemented.
- the embodiment of the present application discloses a computer program product, which includes a non-transitory computer-readable storage medium storing a computer program, and the computer program can be executed by a processor to implement the methods described in the foregoing embodiments.
- the processes in the methods of the above embodiments can be implemented by a computer program instructing related hardware, and the program can be stored in a non-volatile computer-readable storage medium; when the program is executed, it may include the processes of the embodiments of the above methods.
- the storage medium may be a magnetic disk, an optical disk, a ROM, or the like.
- Non-volatile memory may include ROM, programmable ROM (PROM), erasable PROM (EPROM), electrically erasable PROM (EEPROM), or flash memory.
- Volatile memory can include random access memory (RAM), which acts as external cache memory.
- RAM can take many forms, such as static RAM (SRAM), dynamic RAM (DRAM), synchronous DRAM (SDRAM), double data rate SDRAM (DDR SDRAM), enhanced SDRAM (ESDRAM), synchlink DRAM (SLDRAM), Rambus direct RAM (RDRAM), and direct Rambus DRAM (DRDRAM).
- Each functional unit in each embodiment of the present application may be integrated into one processing unit, each unit may exist separately physically, or two or more units may be integrated into one unit.
- the above-mentioned integrated units can be implemented in the form of hardware or in the form of software functional units.
- if the above-mentioned integrated units are implemented in the form of software functional units and sold or used as independent products, they can be stored in a computer-accessible memory.
- based on this understanding, the technical solution of the present application in essence, or the part that contributes to the prior art, or all or part of the technical solution, can be embodied in the form of a software product; the computer software product is stored in a memory and includes several instructions to cause a computer device (which may be a personal computer, a server, or a network device, and specifically may be a processor in the computer device) to execute all or part of the steps of the methods in the various embodiments of the present application.
Abstract
The embodiments of the present application disclose an information output method, a head-mounted display device, and a readable storage medium. The information output method includes: identifying a target entity object and acquiring a first information component corresponding to the target entity object; displaying the first information component in a first mode; and in response to an interactive operation, controlling, according to the interactive operation, the first information component to switch from the first mode to a second mode for display, wherein the information carrying capacity of the second mode is greater than that of the first mode.
Description
This application claims priority to the Chinese patent application filed on November 23, 2021, with application number 202111394126.8 and invention title "Information output method, head-mounted display device, and readable storage medium", the entire contents of which are incorporated herein by reference.
This application relates to the technical field of wearable devices, and in particular to an information output method, a head-mounted display device, and a readable storage medium.
With the continuous development of head-mounted display devices, most head-mounted display devices can recommend information about objects within the device's field of view. In practice, however, it has been found that existing information recommendation methods often output all the relevant information about the objects within the field of view at once, flooding the user's field of view with a large amount of information. Existing information recommendation approaches are therefore not flexible enough.
Summary of the Invention
The embodiments of the present application provide an information output method, a head-mounted display device, and a readable storage medium.
A first aspect of the embodiments of the present application provides an information output method, which is applicable to a head-mounted display device, and the method includes:
identifying a target entity object, and acquiring a first information component corresponding to the target entity object;
displaying the first information component in a first mode; and
in response to an interactive operation, controlling, according to the interactive operation, the first information component to switch from the first mode to a second mode for display, wherein the information carrying capacity of the second mode is greater than that of the first mode.
A second aspect of the embodiments of the present application provides a head-mounted display device, including:
an acquisition unit configured to identify a target entity object and acquire a first information component corresponding to the target entity object;
a display unit configured to display the first information component in a first mode; and
a control unit configured to, in response to an interactive operation, control, according to the interactive operation, the first information component to switch from the first mode to a second mode for display, wherein the information carrying capacity of the second mode is greater than that of the first mode.
A third aspect of the embodiments of the present application provides a head-mounted display device, including:
a memory storing executable program code; and
a processor coupled to the memory;
wherein the processor calls the executable program code stored in the memory, and when the executable program code is executed by the processor, the processor implements the method described in the first aspect of the embodiments of the present application.
A fourth aspect of the embodiments of the present application provides a computer-readable storage medium on which executable program code is stored, wherein, when the executable program code is executed by a processor, the method described in the first aspect of the embodiments of the present application is implemented.
A fifth aspect of the embodiments of the present application discloses a computer program product which, when run on a computer, causes the computer to execute any of the methods disclosed in the first aspect of the embodiments of the present application.
A sixth aspect of the embodiments of the present application discloses an application distribution platform for distributing a computer program product, wherein, when the computer program product runs on a computer, the computer is caused to execute any of the methods disclosed in the first aspect of the embodiments of the present application.
The details of one or more embodiments of the present application are set forth in the following drawings and description. Other features and benefits of the present application will become apparent from the specification, the drawings, and the claims.
To describe the technical solutions of the embodiments of the present application more clearly, the following briefly introduces the drawings required for describing the embodiments and the prior art. Obviously, the drawings in the following description show only some embodiments of the present application, and other drawings may be obtained from these drawings.
FIG. 1 is a schematic diagram of a scenario of an information output method disclosed in an embodiment of the present application;
FIG. 2 is a schematic flowchart of an information output method disclosed in an embodiment of the present application;
FIG. 3A is a schematic flowchart of another information output method disclosed in an embodiment of the present application;
FIG. 3B is a schematic diagram of an interface of a display area disclosed in an embodiment of the present application;
FIG. 3C is a schematic diagram of another interface of the display area disclosed in an embodiment of the present application;
FIG. 3D is a schematic diagram of yet another interface of the display area disclosed in an embodiment of the present application;
FIG. 3E is a schematic diagram of yet another interface of the display area disclosed in an embodiment of the present application;
FIG. 3F is a schematic diagram of yet another interface of the display area disclosed in an embodiment of the present application;
FIG. 3G is a schematic diagram of yet another interface of the display area disclosed in an embodiment of the present application;
FIG. 4 is a structural block diagram of a head-mounted display device disclosed in an embodiment of the present application;
FIG. 5 is a structural block diagram of another head-mounted display device disclosed in an embodiment of the present application.
The embodiments of the present application provide an information output method, a head-mounted display device, and a readable storage medium, which can improve the flexibility of information output.
To help those skilled in the art better understand the solutions of the present application, the technical solutions in the embodiments of the present application are described below with reference to the drawings in the embodiments of the present application. Obviously, the described embodiments are only some rather than all of the embodiments of the present application. All embodiments based on the embodiments in the present application shall fall within the protection scope of the present application.
It can be understood that the head-mounted display (HMD) device involved in the embodiments of the present application is a display device worn on the user's head. The head-mounted display device sends optical signals to the eyes and can realize different effects such as virtual reality (VR), augmented reality (AR), and mixed reality (MR); specifically, it can be applied in fields such as learning, entertainment, medical care, and the military.
The technical solutions of the present application are further described below by way of embodiments.
Please refer to FIG. 1, which is a schematic diagram of a scenario of an information output method disclosed in an embodiment of the present application. The scenario shown in FIG. 1 may include a head-mounted display device 10 and a user 20. First, the head-mounted display device 10 identifies a target entity object within its field of view and acquires a first information component corresponding to the target entity object; then it displays the first information component in a first mode; finally, in response to an interactive operation of the user 20, the head-mounted display device 10 controls, according to the interactive operation, the first information component to switch from the first mode with a small information carrying capacity to the second mode with a large information carrying capacity. By implementing this method, the relevant information about the target entity object presented in the user's field of view can be shown to the user step by step, in a presentation manner with increasing amounts of information based on the user's interaction. This prevents a large amount of information from flooding the user's field of view at once and improves the flexibility of information output.
Please refer to FIG. 2, which is a schematic flowchart of an information output method disclosed in an embodiment of the present application. The information output method shown in FIG. 2 may include the following steps:
201. Identify a target entity object, and acquire a first information component corresponding to the target entity object.
The target entity object is an entity object that is within the field of view of the head-mounted display device and for which relevant information is stored in advance. It should be noted that the size of the field of view of the head-mounted display device is related to the device's field-of-view angle: the larger the field-of-view angle, the larger the field of view. The field-of-view angle of the head-mounted display device indicates the angle between the edge of the head-mounted display device and the line to the user's eyes, and includes a horizontal field-of-view angle and a vertical field-of-view angle.
In the embodiments of the present application, the target entity object may be a person or an object in real space (for example, tables and chairs, plants, animals, vehicles, sculptures, buildings, posters, street signs, text, and so on).
In some embodiments, identifying the target entity object may include: collecting, through an image sensor of the head-mounted display device, an image of an entity object within the field of view of the head-mounted display device, and identifying the target entity object from the image. The image sensor may include but is not limited to at least one of the following: a monocular camera, a multi-camera array, ultrasonic radar, lidar, and the like.
The image may be a two-dimensional image and/or a three-dimensional image, which is not limited in the embodiments of the present application. For example, when the entity object is a person, the image of the entity object may be a face image of the person; when the entity object is a sculpture, the image may be a two-dimensional image of the side of the sculpture facing the user, or a three-dimensional image of the sculpture.
In the embodiments of the present application, the relevant information of the target entity object may be carried by one or more information components, and the first information component may be any one or more of them.
202. Display the first information component in the first mode.
The first information component in the first mode is displayed in the display area of the head-mounted display device.
In the embodiments of the present application, the first mode represents a display style of the first information component; the first information component in the first mode displays the relevant information of the target entity object carried by the first information component in the display style corresponding to the first mode. The first information component may include multiple modes, and the display styles of the relevant information corresponding to the first information component differ between modes.
In some embodiments, displaying the first information component in the first mode in the display area of the head-mounted display device includes but is not limited to the following methods:
Method 1: display the first information component in the first mode at a specified position in the display area of the head-mounted display device;
Method 2: acquire relative pose information between the target entity object and the head-mounted display device; determine, according to the relative pose information, the mapped position of the target entity object on the display area of the head-mounted display device; determine the display position corresponding to the first information component according to the mapped position and preset overlay display information of the first information component; and display the first information component in the first mode at the display position. The overlay display information may indicate the relative positional relationship between the display position and the mapped position.
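Method 2 can be sketched in two dimensions as follows. The pass-through projection and tuple-based positions are simplifying assumptions; a real device would project the relative pose through the camera and display optics to obtain the mapped position.

```python
def display_position(mapped_position, overlay_offset):
    """Compute where to draw the first information component.

    mapped_position -- where the target entity object maps onto the
                       display area, as (x, y) display coordinates
    overlay_offset  -- the preset overlay display information, assumed
                       here to be a relative (dx, dy) offset from the
                       mapped position
    """
    mx, my = mapped_position
    dx, dy = overlay_offset
    # The component is anchored relative to the target's mapped position,
    # so it appears to follow the target as the user's head moves.
    return (mx + dx, my + dy)
```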
It can be understood that after step 202, in addition to the target entity object, the user's field of view also includes the first information component of the target entity object in the first mode.
203. In response to an interactive operation, control, according to the interactive operation, the first information component to switch from the first mode to the second mode for display, wherein the information carrying capacity of the second mode is greater than that of the first mode.
In some embodiments, the interactive operation may be triggered in at least one of, but not limited to, the following ways: by operating a control device corresponding to the head-mounted display device, by detecting a user gesture, by detecting the user's voice, and by detecting the duration of the user's gaze.
The first information component in the second mode displays the relevant information of the target entity object carried by the first information component in the display style corresponding to the second mode.
Through the above method, the target entity object can be identified, the first information component of the target entity object can be displayed in the first mode with a small information carrying capacity, and when the user triggers an interactive operation, the first information component can be switched from the first mode to the second mode with a larger information carrying capacity for display. Compared with outputting all the relevant information of the entity object at once, this progressive, step-by-step way of presenting information can effectively reduce the amount of information in the user's field of view and thus improves the flexibility of information output.
Please refer to FIG. 3A, which is a schematic flowchart of another information output method disclosed in an embodiment of the present application. The information output method shown in FIG. 3A may include the following steps:
301. Identify a target entity object, and acquire a first information component corresponding to the target entity object.
In some embodiments, identifying the target entity object and acquiring the first information component corresponding to the target entity object may include: identifying a user scene and acquiring scene information matching the user scene, the scene information including object information corresponding to multiple entity objects and at least one information component corresponding to each piece of object information; acquiring a second entity object matching at least one entity object in the scene information, and determining that the second entity object is the target entity object; and acquiring the first information component corresponding to the target entity object according to the object information corresponding to the second entity object.
The second entity object is an entity object within the field of view of the head-mounted display device, and matching the second entity object with an entity object in the scene information may include: matching the second entity object with the object information of the entity objects included in the scene information.
In some embodiments, identifying the user scene may include: identifying the user scene according to data collected by sensors. The sensors may include an image sensor and/or a posture sensor: the image sensor is used to collect image data of the second entity object within the field of view, and the posture sensor is used to collect the user's posture data, which may include but is not limited to movement speed, user posture, and the like.
In some embodiments, the object information corresponding to any entity object may include image data of that entity object, and the object information corresponding to the second entity object may include image data of the second entity object. Further, matching the object information corresponding to the second entity object with the object information corresponding to any entity object includes: matching the image data of the second entity object with the image data of that entity object.
In some embodiments, identifying the user scene according to the data collected by the sensors may include but is not limited to the following methods:
Method 1: if the sensors include an image sensor, identify the user scene according to the image data collected by the image sensor.
Method 2: if the sensors include an image sensor, identify the user scene according to the image data collected by the image sensor and positioning data collected by a positioning module.
Method 3: if the sensors include an image sensor and a posture sensor, identify the user scene according to the image data collected by the image sensor when the posture data collected by the posture sensor indicates that the user is stationary.
Method 4: if the sensors include an image sensor and a posture sensor, identify the user scene according to the image data collected by the image sensor and positioning data collected by the positioning module when the posture data collected by the posture sensor indicates that the user is stationary.
In some embodiments, acquiring the scene information matching the user scene includes: displaying at least one information identifier matching the user scene; and in response to a selection operation on an information identifier, acquiring the scene information corresponding to the selected information identifier.
In some embodiments, before displaying the at least one information identifier matching the user scene, the at least one information identifier matching the user scene may also be acquired. Further, acquiring the at least one information identifier matching the user scene may include: acquiring the at least one information identifier matching the user scene according to a pre-built user profile. The user profile may include but is not limited to the user's basic characteristics, social characteristics, and preference characteristics. The basic characteristics may include at least one of gender, age, education, and the like; the social characteristics may include at least one of family, social relationships, occupation, and the like; and the preference characteristics may include at least one of hobbies, brand preferences, product preferences, and the like.
It can be understood that if the user scene is identified as a shopping mall and the user profile indicates that the user is female, 25 years old, unmarried, and fond of food, the at least one information identifier acquired according to the user profile and matching the user scene may relate to food information in the mall. If the user scene is identified as a shopping mall and the user profile indicates that the user is female, 25 years old, unmarried, and fond of clothes, the at least one information identifier acquired according to the user profile and matching the user scene may relate to clothing information in the mall. By implementing this method, recommending information identifiers based on the user profile can greatly improve the accuracy of the recommended information.
In some embodiments, the information identifier may include at least one of text, icons, letters, and the like. The selection operation on the information identifier may be triggered in at least one of, but not limited to, the following ways: by operating a control device corresponding to the head-mounted display device, by detecting a user gesture, by detecting the user's voice, and by detecting the duration of the user's gaze.
In some embodiments, in response to the selection operation on the information identifier, acquiring the scene information corresponding to the selected information identifier may include: in response to the selection operation on the information identifier, sending the selected information identifier to a server connected to the head-mounted display device, so that the server looks up the scene information corresponding to the selected information identifier and sends the found scene information to the head-mounted display device.
For example, if the at least one information identifier acquired according to the user profile and matching the user scene relates to clothing information in the mall, one of the information identifiers may represent one floor of the clothing area. It can be understood that if the clothing area of the mall has three floors, the at least one information identifier includes three information identifiers, each representing one floor of the clothing area; the target identifier may represent any floor of the clothing area, and the scene information corresponding to the target identifier may include the clothing information of each clothing store on that floor.
In some embodiments, acquiring the first information component corresponding to the target entity object according to the object information corresponding to the second entity object may include: acquiring, according to the object information corresponding to the second entity object, at least one information component corresponding to the target entity object from the scene information; and determining the first information component according to the priority level of each information component in the at least one information component corresponding to the target entity object. The priority level represents the output order of the information components: the higher the priority level of an information component, the earlier it is output.
In some embodiments, determining the first information component according to the priority level of each information component in the at least one information component corresponding to the target entity object may include but is not limited to the following methods:
Method 1: take the information component with the highest priority level among the at least one information component corresponding to the target entity object as the first information component;
Method 2: take the information components with the highest and second-highest priority levels among the at least one information component corresponding to the target entity object as the first information component.
For example, if the target identifier represents a floor of the clothing area and the target entity object is the storefront sign of the brand A clothing store on that floor, the at least one information component corresponding to the target entity object may include an information component carrying the brand information of the brand A clothing store, an information component carrying the clothing information of the brand A clothing store, an information component carrying the sales performance of the brand A clothing store, and an information component carrying the shopping guide information of the brand A clothing store. Among them, the information component carrying the clothing information has the highest priority level, the information component carrying the shopping guide information has the second-highest priority level, the information component carrying the sales performance has the lowest priority level, and the information component carrying the brand information has the second-lowest priority level; the first information component may be the information component carrying the clothing information of the brand A clothing store.
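The priority-based selection in the two methods above, applied to the brand A example, might look like the following sketch; the component names and numeric priority levels are illustrative assumptions (higher number means higher priority here).

```python
def pick_first_component(components, top_n=1):
    """Select the first information component(s) by priority level.

    components -- mapping of component name to priority level
    top_n      -- 1 for Method 1 (highest only), 2 for Method 2
                  (highest and second highest)
    """
    # Rank component names by their priority level, highest first.
    ranked = sorted(components, key=components.get, reverse=True)
    return ranked[:top_n]
```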
302. Display the first information component in the first mode.
In some embodiments, displaying the first information component in the first mode may include but is not limited to the following methods:
Method 1: when there are multiple target entity objects, display the first information components in the first mode respectively corresponding to a specified number of target entity objects.
In some embodiments, displaying the first information components in the first mode respectively corresponding to the specified number of target entity objects may include but is not limited to the following methods:
(1) If the number of target entity objects is greater than the specified number, determine third target entity objects from the multiple target entity objects according to the actual distance between each target entity object and the head-mounted display device, the number of third target entity objects being the specified number; and display the first information components in the first mode respectively corresponding to the third target entity objects.
For example, if the target entity objects include the storefront signs of the brand A, brand B, and brand C clothing stores, the specified number is two, and the brand A clothing store is closest to the head-mounted display device, the brand B clothing store second closest, and the brand C clothing store farthest, then the displayed first information components in the first mode correspond to the storefront signs of the brand A and brand B clothing stores.
(2) If the number of target entity objects is less than or equal to the specified number, display the first information components in the first mode respectively corresponding to each of the multiple target entity objects.
For example, if the target entity objects include the storefront signs of the brand A, brand B, and brand C clothing stores and the specified number is three, the displayed first information components in the first mode respectively correspond to the storefront signs of the brand A, brand B, and brand C clothing stores.
Method 2: when there are multiple target entity objects, acquire the distance value between each target entity object and the head-mounted display device; determine a first entity object from the multiple target entity objects according to the distance values; and display the first information component in the first mode corresponding to the first entity object. The first entity object may be one or more of the multiple target entity objects.
In some embodiments, the first entity object may include the target entity object closest to the head-mounted display device.
In some embodiments, the first entity object may include the target entity objects closest and second closest to the head-mounted display device.
In some embodiments, after step 302, if no interactive operation is detected within a first specified duration, the display of the first information component is terminated. The first specified duration may be obtained through extensive experiments; for example, it may be 8 s (seconds), 9 s, 10 s, or 12 s. By implementing this method, if the user does not trigger an interactive operation on the first information component within the first specified duration after the first information component is displayed, the display of the first information component is actively terminated, so that information the user is not interested in disappears from the user's field of view in a timely manner.
303. In response to an interactive operation, control, according to the interactive operation, the first information component to switch from the first mode to the second mode for display, wherein the information carrying capacity of the second mode is greater than that of the first mode.
In some embodiments, the first information component may include multiple information components. After step 302, in response to a selection operation on an information component, the selected information component may be taken as the target information component.
Further, in response to the interactive operation, controlling, according to the interactive operation, the first information component to switch from the first mode to the second mode for display may include: in response to the interactive operation, controlling, according to the interactive operation, the target information component to switch from the first mode to the second mode for display.
The target information component may include one or more of the multiple information components, which is not limited in the embodiments of the present application.
In some embodiments, the first mode includes a guide mode and the second mode is a display mode; and/or the first mode includes a display mode and the second mode includes an interactive mode. The display style of the first information component in the guide mode includes a guide icon, which is used to prompt the user to acquire the relevant information of the target entity object. The display style of the first information component in the display mode includes a card style, and the card style is used to display at least one of the following elements: picture, text, icon, list, and grid, so as to present the relevant information of the target entity object to the user in a card style. The display style of the first information component in the interactive mode includes a card style and an application jump control, which are used to present the relevant information of the target entity object to the user in a card style, and the application jump control is used to jump into the application program corresponding to the first information component.
It should be noted that the specifications of the card style may be defined in advance by software developers. For example, the card style specifications may include 2×2, 4×2, 4×4, 2×4, and the like. The 2×2 card style is used to display at most two of the following elements: text, picture, and icon. The 4×2 card style is used to display at most three of the following elements: text, picture, icon, and button. The 4×4 card style is used to display at most four of the following elements: text, picture, icon, button, list, and grid. The 2×4 card style is used to display at most three of the following elements: text, picture, icon, button, and grid.
For example, if the first information component is the information component carrying the clothing information of the brand A clothing store, the first mode is the guide mode, and the second mode is the display mode, then the guide icon may be a clothes icon, and the card style is used to display the clothing store's discount information and new arrivals.
Please refer to FIG. 3B, which is a schematic diagram of the interface when the display area displays the first information component in the guide mode; the interface shown in FIG. 3B may include an icon 30. Please refer to FIG. 3C, which is a schematic diagram of the interface when the display area displays the first information component in the display mode; the interface shown in FIG. 3C may include a card 40 and a card 50, wherein the card 40 is used to display text describing clothing discounts and the card 50 is used to display images of new clothing.
For example, if the first information component is the information component carrying the clothing information of the brand A clothing store, the first mode is the display mode, and the second mode is the interactive mode, then the application jump controls may include a control for entering brand A's online store page in the M application and a control for entering the review page about brand A in the N application. The M application is a transaction-oriented e-commerce platform, such as "Jingdong" or "Taobao"; the N application may be a content-oriented e-commerce platform, such as "Xiaohongshu".
FIG. 3D is a schematic diagram of the interface when the display area displays the first information component in the interactive mode; the interface shown in FIG. 3D may include a card 40, a card 50, an application jump control 60, and an application jump control 70, wherein for the card 40 and the card 50, refer to the description of FIG. 3C, which will not be repeated here. The text on the application jump control 60 is "Enter the online store", which is used to jump to brand A's online store page in the M application; the text on the application jump control 70 is "Dianping", which is used to jump to the review page about brand A in the N application.
In some embodiments, when the first mode includes the display mode, the interactive operation may include an interactive operation triggered by the user operating a control device corresponding to the head-mounted display device, and/or an interactive operation triggered by detecting a user gesture.
In some embodiments, when the first mode includes the guide mode, the interactive operation includes an interactive operation triggered by detecting the duration of the user's gaze.
The switch from the guide mode to the display mode relies on an interactive operation triggered by the user's gaze, while the switch from the display mode to the interactive mode relies on an interactive operation actively triggered by the user. This mode-switching approach of the first information component better fits user habits and helps increase user stickiness.
304. If the target entity object is not identified within a second specified duration, switch the first information component from the second mode to the first mode.
It can be understood that if the target entity object is no longer within the field of view of the head-mounted display device within the second specified duration, the first information component is switched from the second mode to the first mode. For example, if the storefront sign of the brand A clothing store disappears from the user's field of view within the second specified duration, the interface of the display area is switched from FIG. 3C to FIG. 3B.
In some embodiments, the second specified duration may be less than the first specified duration; for example, the second specified duration may be 3 s, 4 s, or 5 s. If the target entity object disappears from the user's field of view within the second specified duration after the first information component switches to the second mode, the user's interest in the target entity object is considered to have decreased, and the first information component can be switched from the second mode back to the first mode to intelligently reduce the amount of information in the user's field of view.
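The two duration-based rules (terminating the first-mode display when no interaction arrives within the first specified duration, and falling back from the second mode to the first mode when the target is lost for the second specified duration) can be sketched as a small state update. The default durations below are only the examples given in the text, used here as assumed values.

```python
# Example values from the text, used only as illustrative defaults.
FIRST_SPECIFIED_DURATION = 10.0   # s: no interaction after first-mode display
SECOND_SPECIFIED_DURATION = 3.0   # s: target lost after switching to second mode


def next_state(state, elapsed, interacted, target_visible):
    """Advance the component's display state after `elapsed` seconds.

    state -- one of "first_mode", "second_mode", "hidden"
    """
    if state == "first_mode" and not interacted and elapsed >= FIRST_SPECIFIED_DURATION:
        return "hidden"       # terminate the display (rule after step 302)
    if state == "second_mode" and not target_visible and elapsed >= SECOND_SPECIFIED_DURATION:
        return "first_mode"   # fall back to the low-information mode (step 304)
    return state
```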
305、若再次识别到目标实体对象,当第一信息组件再次由第一模式切换为第二模式时,显示处于第二模式的第二信息组件;其中,第一信息组件和第二信息组件不同。
可以理解的是,若目标实体对象重新处于头戴式显示设备的视野内,且第一信息组件再次由第一模式切换为第二模式时,则显示处于第二模式的第二信息组件。其中,第二信息组件的优先级等级低于第一信息组件。
示例性的,若第一信息组件是优先级等级最高的信息组件,则第二信息组件为优先级等级次高的信息组件。示例性的,若第一信息组件为承载A品牌服装店的服装信息的信息组件,则第二信息组件可以为承载A品牌服装店的导购员信息的信息组件。
示例性的,若第二信息组件为承载A品牌服装店的导购员信息的信息组 件,第二信息组件包括引导模式和展示模式。请参阅图3E和图3F,其中,图3E是显示区域显示处于引导模式的第二信息组件时的界面示意图,图3E所示的界面示意图包括图标80。图3F是显示区域显示处于展示模式的第二信息组件时的界面示意图,图3F所示的界面示意图包括卡片90和卡片100,其中,卡片90和卡片100均包括导购的照片,称呼及负责内容。
In some embodiments, if the target entity object is recognized again, displaying the second information component in the second mode when the first information component switches from the first mode to the second mode again includes, but is not limited to, the following manners:
Manner 1: If the target entity object is recognized again, when the first information component switches from the first mode to the second mode again, terminate the display of the first information component in the second mode in the display area, and display the second information component in the second mode.
It can be understood that, in Manner 1, the interface of the display area is switched from FIG. 3B to FIG. 3F.
Manner 2: If the target entity object is recognized again, when the first information component switches from the first mode to the second mode again, keep the first information component in the second mode displayed in the display area, and display the second information component in the second mode.
It can be understood that, in Manner 2, the interface of the display area is switched from FIG. 3B to FIG. 3G, where FIG. 3G includes the first information component 110 in the second mode and the second information component 120 in the second mode.
In some embodiments, after step 304, the following step may further be performed: if the target entity object is recognized again, displaying a second information component in the first mode in the display area, where the first information component and the second information component are different.
In some embodiments, after step 304, the following step may further be performed: if the target entity object is recognized again, displaying a second information component in the second mode in the display area, where the first information component and the second information component are different.
In some embodiments, recognizing the target entity object again includes: if the target entity object is detected within the field of view of the head-mounted display device within a third specified duration after the target entity object disappeared from that field of view, determining that the target entity object is within the field of view again.
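The "recommend other components in turn" behaviour described next amounts to rotating through a priority-ordered list each time the target is re-recognized. A minimal sketch, assuming an illustrative component list (the names are not from the patent):

```python
from collections import deque

def make_poller(components):
    """Cycle through a target's information components, highest priority first."""
    queue = deque(components)

    def next_component():
        comp = queue.popleft()
        queue.append(comp)  # rotate so the next re-recognition shows another one
        return comp

    return next_component

# Illustrative: the clothing component has top priority, staff info next.
poll = make_poller(["clothing_info", "shopping_guide_info"])
```

Each call to `poll()` yields the component to show on the next re-recognition, wrapping around once all have been shown.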
In an embodiment of the present application, when the user pays attention to the target entity object again, the other information components corresponding to the target entity object can be recommended in turn in the above manner, enabling intelligent updating of the output information.
By implementing the above method, the target entity object can be recognized and its first information component displayed in the first mode, which carries a small amount of information; when the user triggers an interactive operation, the first information component is switched from the first mode to the second mode, which carries a larger amount of information. Compared with outputting all information related to an entity object at once, this progressive, step-by-step presentation effectively reduces the amount of information in the user's field of view and thereby improves the flexibility of information output. Further, when the user's interest in the target entity object declines, the first information component can be switched from the second mode back to the first mode to intelligently reduce the amount of information in the user's field of view; and when the user's interest in the target entity object rises again, the other information components of the target entity object are polled, that is, the output information about the target entity object is intelligently updated, further improving the flexibility of information output.
Referring to FIG. 4, FIG. 4 is a structural block diagram of a head-mounted display device disclosed in an embodiment of the present application. The device may include an obtaining unit 401, a display unit 402, and a control unit 403, where:
the obtaining unit 401 is configured to recognize a target entity object and obtain a first information component corresponding to the target entity object;
the display unit 402 is configured to display the first information component in a first mode; and
the control unit 403 is configured to, in response to an interactive operation, control, according to the interactive operation, the first information component to switch from the first mode to a second mode for display, where the information carrying capacity of the second mode is greater than that of the first mode.
In some embodiments, the manner in which the display unit 402 displays the first information component in the first mode may specifically include: the display unit 402 being configured to, when there are multiple target entity objects, display a specified number of first information components in the first mode, each corresponding to one of the target entity objects.
In some embodiments, the manner in which the display unit 402 displays the first information component in the first mode may specifically include: the display unit 402 being configured to, when there are multiple target entity objects, obtain a distance value between each target entity object and the user; determine a first entity object from the multiple target entity objects according to the distance values; and display the first information component in the first mode corresponding to the first entity object.
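Distance-based selection of the first entity object can be sketched as choosing the entity nearest to the user. The record shapes and coordinate convention here are assumptions for illustration only:

```python
import math

def nearest_entity(user_pos, entities):
    """Pick the entity closest to the user.

    entities: iterable of (name, (x, y, z)) pairs; returns the closest name.
    """
    return min(entities, key=lambda e: math.dist(user_pos, e[1]))[0]
```

A device could call this each frame with positions estimated from the headset's tracking, then display only the winner's component to keep the view uncluttered.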
In some embodiments, the display unit 402 is further configured to, after displaying the first information component in the first mode, terminate the display of the first information component if no interactive operation is detected within a first specified duration.
In some embodiments, the control unit 403 is further configured to, after controlling, in response to the interactive operation, the first information component to switch from the first mode to the second mode for display: if the target entity object is not recognized within a second specified duration, switch the first information component from the second mode to the first mode; and/or, if the target entity object is recognized again, display a second information component in the second mode when the first information component switches from the first mode to the second mode again, where the first information component and the second information component are different.
In some embodiments, the first information component includes multiple information components, and the control unit 403 is further configured to respond to a selection operation on the information components and take the selected information component as a target information component;
the manner in which the control unit 403 controls, in response to the interactive operation, the first information component to switch from the first mode to the second mode for display may specifically include: the control unit 403 being configured to, in response to the interactive operation, control, according to the interactive operation, the target information component to switch from the first mode to the second mode for display.
In some embodiments, the manner in which the obtaining unit 401 recognizes the target entity object and obtains the first information component corresponding to the target entity object may specifically include: the obtaining unit 401 being configured to recognize a user scene and obtain scene information matching the user scene, the scene information including object information corresponding to each of multiple entity objects and at least one information component corresponding to each piece of object information; obtain a second entity object matching at least one entity object in the scene information, and determine the second entity object as the target entity object; and obtain the first information component corresponding to the target entity object according to the object information corresponding to the second entity object.
In some embodiments, the manner in which the obtaining unit 401 obtains the scene information matching the user scene may specifically include: the obtaining unit 401 being configured to display at least one information identifier matching the user scene, and, in response to a selection operation on the information identifiers, obtain the scene information corresponding to the selected information identifier.
In some embodiments, the manner in which the obtaining unit 401 obtains the first information component corresponding to the target entity object according to the object information corresponding to the second entity object may specifically include: the obtaining unit 401 being configured to obtain, from the scene information, at least one information component corresponding to the target entity object according to the object information corresponding to the second entity object; and determine the first information component according to the priority level of each of the at least one information component corresponding to the target entity object.
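Selecting the first information component by priority level is a one-line reduction. A minimal sketch, assuming `(name, priority)` pairs and the convention that a lower number means higher priority (the patent does not define the priority scale):

```python
def pick_first_component(components):
    """Return the name of the highest-priority component.

    components: iterable of (name, priority) pairs; lower number = higher
    priority (an assumed convention, not specified by the source).
    """
    return min(components, key=lambda c: c[1])[0]
```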
In some embodiments, the first mode includes a guide mode and the second mode is a display mode; and/or, the first mode includes a display mode and the second mode includes an interactive mode;
where the display style of the first information component in the guide mode includes a guide icon for prompting the user to obtain information related to the target entity object; the display style of the first information component in the display mode includes a card style, the card style being used to display at least one of the following elements: picture, text, icon, list, and grid, so as to present the information related to the target entity object to the user in card form; and the display style of the first information component in the interactive mode includes the card style and an application jump control, the card style presenting the information related to the target entity object to the user, and the application jump control being used to jump into the application corresponding to the first information component.
In some embodiments, when the first mode includes the display mode, the interactive operation includes an interactive operation triggered by the user operating a control device corresponding to the head-mounted display device; and/or an interactive operation triggered by detecting a user gesture;
when the first mode includes the guide mode, the interactive operation includes an interactive operation triggered by detecting the duration of the user's gaze.
Referring to FIG. 5, FIG. 5 is a structural block diagram of a head-mounted display device disclosed in an embodiment of the present application. As shown in FIG. 5, the head-mounted display device may include a processor 501 and a memory 502 coupled to the processor 501, where the memory 502 may store one or more computer programs.
The processor 501 may include one or more processing cores. The processor 501 connects the various parts of the terminal device through various interfaces and lines, and performs the various functions of the terminal device and processes data by running or executing instructions, programs, code sets, or instruction sets stored in the memory 502 and invoking data stored in the memory 502. Optionally, the processor 501 may be implemented in at least one of the following hardware forms: digital signal processing (DSP), field-programmable gate array (FPGA), and programmable logic array (PLA). The processor 501 may integrate one or a combination of a central processing unit (CPU), a graphics processing unit (GPU), a modem, and the like, where the CPU mainly handles the operating system, user interface, applications, and so on; the GPU is responsible for rendering and drawing display content; and the modem handles wireless communication. It can be understood that the modem may also not be integrated into the processor 501 and may instead be implemented by a separate communication chip.
The memory 502 may include random access memory (RAM) or read-only memory (ROM). The memory 502 may be used to store instructions, programs, code, code sets, or instruction sets. The memory 502 may include a program storage area and a data storage area, where the program storage area may store instructions for implementing an operating system, instructions for implementing at least one function (such as a touch function, a sound playback function, or an image playback function), instructions for implementing the above method embodiments, and so on. The data storage area may further store data created during use of the terminal device, and the like.
In an embodiment of the present application, the processor 501 further has the following functions:
recognizing a target entity object and obtaining a first information component corresponding to the target entity object;
displaying the first information component in a first mode;
in response to an interactive operation, controlling, according to the interactive operation, the first information component to switch from the first mode to a second mode for display, where the information carrying capacity of the second mode is greater than that of the first mode.
In an embodiment of the present application, the processor 501 further has the following function:
when there are multiple target entity objects, displaying a specified number of first information components in the first mode, each corresponding to one of the target entity objects.
In an embodiment of the present application, the processor 501 further has the following function:
when there are multiple target entity objects, obtaining a distance value between each target entity object and the user; determining a first entity object from the multiple target entity objects according to the distance values; and displaying the first information component in the first mode corresponding to the first entity object.
In an embodiment of the present application, the processor 501 further has the following function:
after displaying the first information component in the first mode, terminating the display of the first information component if no interactive operation is detected within a first specified duration.
In an embodiment of the present application, the processor 501 further has the following functions:
if the target entity object is not recognized within a second specified duration, switching the first information component from the second mode to the first mode; and/or, if the target entity object is recognized again, displaying a second information component in the second mode when the first information component switches from the first mode to the second mode again, where the first information component and the second information component are different.
In an embodiment of the present application, the first information component may include multiple information components, and the processor 501 further has the following functions:
responding to a selection operation on the information components and taking the selected information component as a target information component;
in response to an interactive operation, controlling, according to the interactive operation, the target information component to switch from the first mode to the second mode for display.
In an embodiment of the present application, the processor 501 further has the following functions:
recognizing a user scene and obtaining scene information matching the user scene, the scene information including object information corresponding to each of multiple entity objects and at least one information component corresponding to each piece of object information; obtaining a second entity object matching at least one entity object in the scene information, and determining the second entity object as the target entity object; and obtaining the first information component corresponding to the target entity object according to the object information corresponding to the second entity object.
In an embodiment of the present application, the processor 501 further has the following functions:
displaying at least one information identifier matching the user scene;
in response to a selection operation on the information identifiers, obtaining the scene information corresponding to the selected information identifier.
In an embodiment of the present application, the processor 501 further has the following function:
obtaining, from the scene information, at least one information component corresponding to the target entity object according to the object information corresponding to the second entity object; and determining the first information component according to the priority level of each of the at least one information component corresponding to the target entity object.
In an embodiment of the present application, the first mode includes a guide mode and the second mode is a display mode; and/or, the first mode includes a display mode and the second mode includes an interactive mode;
where the display style of the first information component in the guide mode includes a guide icon for prompting the user to obtain information related to the target entity object; the display style of the first information component in the display mode includes a card style, the card style being used to display at least one of the following elements: picture, text, icon, list, and grid, so as to present the information related to the target entity object to the user in card form; and the display style of the first information component in the interactive mode includes the card style and an application jump control, the card style presenting the information related to the target entity object to the user, and the application jump control being used to jump into the application corresponding to the first information component.
In an embodiment of the present application, when the first mode includes the display mode, the interactive operation includes an interactive operation triggered by the user operating a control device corresponding to the head-mounted display device; and/or an interactive operation triggered by detecting a user gesture;
when the first mode includes the guide mode, the interactive operation includes an interactive operation triggered by detecting the duration of the user's gaze.
An embodiment of the present application discloses a computer-readable storage medium storing a computer program, where the computer program, when executed by a processor, implements the methods described in the above embodiments.
An embodiment of the present application discloses a computer program product, including a non-transitory computer-readable storage medium storing a computer program, where the computer program is executable by a processor to implement the methods described in the above embodiments.
Those of ordinary skill in the art can understand that all or part of the processes in the methods of the above embodiments may be completed by a computer program instructing relevant hardware; the program may be stored in a non-volatile computer-readable storage medium, and when executed may include the processes of the above method embodiments. The storage medium may be a magnetic disk, an optical disc, a ROM, or the like.
Any reference to a memory, storage, database, or other medium used herein may include non-volatile and/or volatile memory. Suitable non-volatile memory may include ROM, programmable ROM (PROM), erasable PROM (EPROM), electrically erasable PROM (EEPROM), or flash memory. Volatile memory may include random access memory (RAM), which serves as an external cache. By way of illustration and not limitation, RAM is available in many forms, such as static RAM (SRAM), dynamic RAM (DRAM), synchronous DRAM (SDRAM), double data rate SDRAM (DDR SDRAM), enhanced SDRAM (ESDRAM), synchlink DRAM (SLDRAM), Rambus DRAM (RDRAM), and direct Rambus DRAM (DRDRAM).
It should be understood that references throughout this specification to "one embodiment" or "an embodiment" mean that a particular feature, structure, or characteristic related to the embodiment is included in at least one embodiment of the present application. Therefore, appearances of "in one embodiment" or "in an embodiment" throughout the specification do not necessarily refer to the same embodiment. Furthermore, these particular features, structures, or characteristics may be combined in any suitable manner in one or more embodiments. Those skilled in the art should also understand that the embodiments described in the specification are all optional embodiments, and the actions and modules involved are not necessarily required by the present application.
In the various embodiments of the present application, it should be understood that the magnitude of the sequence numbers of the above processes does not imply a necessary order of execution; the execution order of the processes should be determined by their functions and internal logic, and should not constitute any limitation on the implementation of the embodiments of the present application.
The functional units in the embodiments of the present application may be integrated into one processing unit, or each unit may physically exist separately, or two or more units may be integrated into one unit. The above integrated unit may be implemented in the form of hardware or in the form of a software functional unit.
If the above integrated unit is implemented in the form of a software functional unit and sold or used as an independent product, it may be stored in a computer-accessible memory. Based on such an understanding, the technical solution of the present application, in essence, or the part contributing to the prior art, or all or part of the technical solution, may be embodied in the form of a software product; the computer software product is stored in a memory and includes several requests for causing a computer device (which may be a personal computer, a server, a network device, or the like, and specifically may be a processor in the computer device) to execute some or all of the steps of the above methods of the various embodiments of the present application.
The information output method, head-mounted display device, and readable storage medium disclosed in the embodiments of the present application have been described in detail above. Specific examples are used herein to explain the principles and implementations of the present application, and the descriptions of the above embodiments are only intended to help understand the method of the present application and its core idea. Meanwhile, for those of ordinary skill in the art, there will be changes in the specific implementation and scope of application based on the idea of the present application. In summary, the content of this specification should not be construed as limiting the present application.
Claims (20)
- An information output method, applied to a head-mounted display device, the method comprising: recognizing a target entity object and obtaining a first information component corresponding to the target entity object; displaying the first information component in a first mode; and in response to an interactive operation, controlling, according to the interactive operation, the first information component to switch from the first mode to a second mode for display, wherein an information carrying capacity of the second mode is greater than an information carrying capacity of the first mode.
- The method according to claim 1, wherein displaying the first information component in the first mode comprises: when there are multiple target entity objects, displaying a specified number of first information components in the first mode, each corresponding to one of the target entity objects.
- The method according to claim 1, wherein displaying the first information component in the first mode comprises: when there are multiple target entity objects, obtaining a distance value between each target entity object and the head-mounted display device; determining a first entity object from the multiple target entity objects according to the distance values; and displaying the first information component in the first mode corresponding to the first entity object.
- The method according to claim 1, wherein displaying the first information component in the first mode comprises: displaying the first information component in the first mode at a specified position in a display area of the head-mounted display device.
- The method according to claim 1, wherein displaying the first information component in the first mode comprises: obtaining relative pose information between the target entity object and the head-mounted display device; determining, according to the relative pose information, a mapping position of the target entity object on a display area of the head-mounted display device; determining a display position of the first information component according to the mapping position and preset overlay display information of the first information component; and displaying the first information component in the first mode at the display position of the first information component, wherein the overlay display information indicates a relative positional relationship between the display position of the first information component and the mapping position.
- The method according to any one of claims 1-5, wherein after displaying the first information component in the first mode, the method further comprises: if no interactive operation is detected within a first specified duration, terminating the display of the first information component.
- The method according to any one of claims 1-5, wherein after controlling, in response to the interactive operation, the first information component to switch from the first mode to the second mode for display, the method further comprises: if the target entity object is not recognized within a second specified duration, switching the first information component from the second mode to the first mode; and/or if the target entity object is recognized again, displaying a second information component in the second mode when the first information component switches from the first mode to the second mode again, wherein the first information component and the second information component are different.
- The method according to claim 7, wherein displaying the second information component in the second mode when the first information component switches from the first mode to the second mode again comprises: when the first information component switches from the first mode to the second mode again, terminating the display of the first information component in the second mode in the display area, and displaying the second information component in the second mode.
- The method according to claim 7, wherein displaying the second information component in the second mode when the first information component switches from the first mode to the second mode again comprises: when the first information component switches from the first mode to the second mode again, keeping the first information component in the second mode displayed in the display area, and displaying the second information component in the second mode.
- The method according to any one of claims 1-5, wherein after controlling, in response to the interactive operation, the first information component to switch from the first mode to the second mode for display, the method further comprises: if the target entity object is not recognized within a second specified duration, switching the first information component from the second mode to the first mode; and if the target entity object is recognized again, displaying a second information component in the second mode in the display area, wherein the first information component and the second information component are different.
- The method according to claim 1, wherein the first information component comprises multiple information components, and the method further comprises: in response to a selection operation on the information components, taking the selected information component as a target information component; wherein controlling, in response to the interactive operation and according to the interactive operation, the first information component to switch from the first mode to the second mode for display comprises: in response to an interactive operation, controlling, according to the interactive operation, the target information component to switch from the first mode to the second mode for display.
- The method according to claim 1, wherein recognizing the target entity object and obtaining the first information component corresponding to the target entity object comprises: recognizing a user scene and obtaining scene information matching the user scene, the scene information comprising object information corresponding to each of multiple entity objects and at least one information component corresponding to each piece of object information; obtaining a second entity object matching at least one of the entity objects in the scene information, and determining the second entity object as the target entity object; and obtaining the first information component corresponding to the target entity object according to the object information corresponding to the second entity object.
- The method according to claim 12, wherein obtaining the scene information matching the user scene comprises: displaying at least one information identifier matching the user scene; and in response to a selection operation on the information identifiers, obtaining the scene information corresponding to the selected information identifier.
- The method according to claim 12, wherein obtaining the first information component corresponding to the target entity object according to the object information corresponding to the second entity object comprises: obtaining, from the scene information, at least one information component corresponding to the target entity object according to the object information corresponding to the second entity object; and determining the first information component according to the priority level of each of the at least one information component corresponding to the target entity object.
- The method according to claim 1, wherein the first mode comprises a guide mode and the second mode is a display mode; and/or, the first mode comprises a display mode and the second mode comprises an interactive mode; wherein a display style of the first information component in the guide mode comprises a guide icon for prompting the user to obtain information related to the target entity object; and/or a display style of the first information component in the display mode comprises a card style, the card style being used to display at least one of the following elements: picture, text, icon, list, and grid, so as to present the information related to the target entity object to the user in the card style; and/or a display style of the first information component in the interactive mode comprises the card style and an application jump control, the card style presenting the information related to the target entity object to the user, and the application jump control being used to jump into an application corresponding to the first information component.
- The method according to claim 15, wherein when the first mode comprises the display mode, the interactive operation comprises an interactive operation triggered by the user operating a control device corresponding to the head-mounted display device; and/or an interactive operation triggered by detecting a user gesture; and when the first mode comprises the guide mode, the interactive operation comprises an interactive operation triggered by detecting a duration of the user's gaze.
- The method according to claim 1, wherein recognizing the target entity object comprises: capturing, by an image sensor of the head-mounted display device, an image frame of entity objects within a field of view of the head-mounted display device; and recognizing the target entity object according to the image frame, wherein the image frame comprises a two-dimensional frame and/or a three-dimensional frame.
- A head-mounted display device, comprising: an obtaining unit, configured to recognize a target entity object and obtain a first information component corresponding to the target entity object; a display unit, configured to display the first information component in a first mode; and a control unit, configured to, in response to an interactive operation, control, according to the interactive operation, the first information component to switch from the first mode to a second mode for display, wherein an information carrying capacity of the second mode is greater than an information carrying capacity of the first mode.
- A head-mounted display device, comprising: a memory storing executable program code; and a processor coupled to the memory, wherein the processor invokes the executable program code stored in the memory, and the executable program code, when executed by the processor, causes the processor to implement the method according to any one of claims 1-17.
- A computer-readable storage medium having executable program code stored thereon, wherein the executable program code, when executed by a processor, implements the method according to any one of claims 1-17.
Applications Claiming Priority (2)
- CN202111394126.8, priority date 2021-11-23
- CN202111394126.8A, filed 2021-11-23, published as CN114063785A (zh): 信息输出方法、头戴式显示设备及可读存储介质

Publications (1)
- WO2023093329A1 (zh), published 2023-06-01

Family
- ID=80279438

Family Applications (1)
- PCT/CN2022/124410, filed 2022-10-10: 信息输出方法、头戴式显示设备及可读存储介质 (published as WO2023093329A1)

Prosecution timeline
- 2021-11-23: CN application CN202111394126.8A filed (patent CN114063785A, status pending)
- 2022-10-10: PCT application PCT/CN2022/124410 filed (WO2023093329A1)

Families Citing this family (1)
- CN114063785A (2021-11-23, published 2022-02-18, Oppo广东移动通信有限公司): 信息输出方法、头戴式显示设备及可读存储介质
Citations (4)
- US20150143423A1 (priority 2013-11-19, published 2015-05-21, Humax Co., Ltd.): Apparatus, method, and system for controlling device based on user interface that reflects user's intention
- CN111142673A (priority 2019-12-31, published 2020-05-12, 维沃移动通信有限公司): 场景切换方法及头戴式电子设备
- CN112684893A (priority 2020-12-31, published 2021-04-20, 上海电气集团股份有限公司): 信息展示方法、装置、电子设备及存储介质
- CN114063785A (priority 2021-11-23, published 2022-02-18, Oppo广东移动通信有限公司): 信息输出方法、头戴式显示设备及可读存储介质

Family Cites Families (4)
- US9096920B1 (priority 2012-03-22, published 2015-08-04, Google Inc.): User interface method
- CN108665553B (priority 2018-04-28, published 2023-03-17, 腾讯科技(深圳)有限公司): 一种实现虚拟场景转换的方法及设备
- CN109949121A (priority 2019-01-21, published 2019-06-28, 广东康云科技有限公司): 一种智能看车的数据处理方法及系统
- CN113419800B (priority 2021-06-11, published 2023-03-24, 北京字跳网络技术有限公司): 交互方法、装置、介质和电子设备
Also Published As
- CN114063785A (zh), published 2022-02-18
Legal Events
- 121 Ep: the EPO has been informed by WIPO that EP was designated in this application (Ref document number: 22897414; Country of ref document: EP; Kind code of ref document: A1)
- NENP: Non-entry into the national phase (Ref country code: DE)