CN114296551A - Target object presenting method and device, electronic equipment and storage medium - Google Patents


Info

Publication number
CN114296551A
Authority
CN
China
Prior art keywords
target object
scene
user
target
condition
Prior art date
Legal status
Pending
Application number
CN202111626260.6A
Other languages
Chinese (zh)
Inventor
唐荣兴
金丽云
杨哲
Current Assignee
Hiscene Information Technology Co Ltd
Original Assignee
Hiscene Information Technology Co Ltd
Priority date
Filing date
Publication date
Application filed by Hiscene Information Technology Co Ltd filed Critical Hiscene Information Technology Co Ltd
Priority to CN202111626260.6A
Publication of CN114296551A
Legal status: Pending

Landscapes

  • User Interface Of Digital Computer (AREA)

Abstract

The application discloses a target object presentation method and apparatus, an electronic device, and a storage medium. The target object presentation method includes the following steps: acquiring a scene in the user's field of view; detecting in real time whether a condition for changing the presentation state of a target object in the scene is met; and outputting a response result of the target object when it is detected that the condition is met. The scheme provides a friendly augmented reality interaction mode that enhances the user's immersive experience; the whole interaction process is simple and fast, and multiple modes are available for presenting the target object, further enhancing the distinctive features of related products.

Description

Target object presenting method and device, electronic equipment and storage medium
Technical Field
The present application relates to the field of communications technologies, and in particular, to a method and an apparatus for presenting a target object, an electronic device, and a storage medium.
Background
Augmented Reality (AR) technology skillfully fuses virtual information with the real world. It draws on a range of techniques such as multimedia, three-dimensional modeling, real-time tracking and registration, intelligent interaction, and sensing, and applies computer-generated virtual information such as text, images, three-dimensional models, music, and video to the real world after simulation, so that the two kinds of information complement each other and the real world is thereby enhanced. AR is a relatively new technology that promotes the integration of real-world and virtual-world information: entity information that would otherwise be difficult to experience within the spatial range of the real world is simulated on the basis of computing and related technologies, and the resulting virtual content is superimposed on the real world for effective application, where it can be perceived by the human senses, producing a sensory experience that goes beyond reality.
In augmented reality, achieving diversified presentation of a specific target object poses a new challenge.
Disclosure of Invention
An object of the present application is to provide a method and an apparatus for presenting a target object, an electronic device, and a storage medium, which aim to provide a friendly augmented reality interaction manner, or to change the existing manner, so as to enhance the user's immersive experience; the whole interaction process is simple and fast, and multiple manners are available for presenting the target object, thereby further enhancing the distinctive features of related products.
According to a first aspect of the present application, an embodiment of the present application provides a method for presenting a target object, including:
acquiring a scene in the user's field of view;
detecting in real time whether a condition for changing the presentation state of a target object in the scene is met; and
outputting a response result of the target object when it is detected that the condition is met.
According to a second aspect of the present application, an embodiment of the present application provides a target object rendering apparatus, which includes:
the scene acquisition module is used for acquiring a scene in the user's field of view;
the condition detection module is used for detecting in real time whether a condition for changing the presentation state of the target object in the scene is met; and
the result output module is used for outputting a response result of the target object when it is detected that the condition for changing the presentation state of the target object in the scene is met.
According to a third aspect of the present application, an embodiment of the present application provides an electronic device, which includes a memory and a processor, where the memory stores a computer program, and the processor executes the method for presenting a target object according to any embodiment of the present application by calling the computer program stored in the memory.
According to a fourth aspect of the present application, an embodiment of the present application provides a storage medium storing a computer program, where the computer program is suitable for being loaded by a processor to execute the method for presenting a target object according to any embodiment of the present application.
The target object presentation method and apparatus, the electronic device, and the storage medium provided herein aim to provide a friendly augmented reality interaction mode, or to change the existing mode, so as to enhance the user's immersive experience; the whole interaction process is simple and fast, and multiple modes are available for presenting the target object, thereby further enhancing the distinctive features of related products.
Drawings
The technical solution and other advantages of the present application will become apparent from the detailed description of the embodiments of the present application with reference to the accompanying drawings.
Fig. 1 is a flowchart illustrating steps of a method for presenting a target object according to an embodiment of the present application.
FIG. 2 is a diagram of an example of user interaction.
FIG. 3 is another exemplary diagram of user interaction.
FIG. 4 is another exemplary diagram of user interaction.
FIG. 5 is a diagram of yet another example of user interaction.
Fig. 6 is a schematic structural diagram of a target object presenting apparatus according to an embodiment of the present application.
Fig. 7 is a schematic diagram of an electronic device according to an embodiment of the present application.
Detailed Description
The technical solutions in the embodiments of the present application will be clearly and completely described below with reference to the drawings in the embodiments of the present application. It is to be understood that the embodiments described are only a few embodiments of the present application and not all embodiments. All other embodiments, which can be derived by a person skilled in the art from the embodiments given herein without making any creative effort, shall fall within the protection scope of the present application.
In a typical configuration of the present application, the devices of the terminal and the service network each include one or more processors (e.g., Central Processing Units (CPUs)), input/output interfaces, network interfaces, and memory.
The memory may include volatile memory in a computer-readable medium, such as random access memory (RAM), and/or non-volatile memory, such as read-only memory (ROM) or flash memory. Memory is an example of a computer-readable medium.
Computer-readable media include permanent and non-permanent, removable and non-removable media, and may implement information storage by any method or technology. The information may be computer-readable instructions, data structures, modules of a program, or other data. Examples of computer storage media include, but are not limited to, phase-change memory (PRAM), static random-access memory (SRAM), dynamic random-access memory (DRAM), other types of random-access memory (RAM), read-only memory (ROM), electrically erasable programmable read-only memory (EEPROM), flash memory or other memory technologies, compact disc read-only memory (CD-ROM), digital versatile discs (DVD) or other optical storage, magnetic cassettes, magnetic tape, magnetic disk storage or other magnetic storage devices, or any other non-transmission medium that can be used to store information accessible by a computing device.
The electronic device referred to in this application includes, but is not limited to, a user equipment, a network device, or a device formed by integrating a user equipment and a network device through a network. The user equipment includes, but is not limited to, any mobile electronic product capable of human-computer interaction with a user, such as a smart phone, a tablet computer, smart glasses, or a head-mounted device, and the mobile electronic product may run any operating system, such as Android or iOS. The network device includes an electronic device capable of automatically performing numerical calculation and information processing according to preset or stored instructions, whose hardware includes, but is not limited to, a microprocessor, an Application-Specific Integrated Circuit (ASIC), a Programmable Logic Device (PLD), a Field-Programmable Gate Array (FPGA), a Digital Signal Processor (DSP), an embedded device, and the like. The network device includes, but is not limited to, a computer, a network host, a single network server, a set of multiple network servers, or a cloud of multiple servers; here, the cloud is composed of a large number of computers or network servers based on Cloud Computing, a kind of distributed computing in which a virtual supercomputer consists of a collection of loosely coupled computers. The network includes, but is not limited to, the Internet, a wide area network, a metropolitan area network, a local area network, a VPN, a wireless ad hoc network, and the like. Preferably, the device may also be a program running on the user equipment, on the network device, or on a device formed by integrating the user equipment and the network device, the touch terminal, or the network device and the touch terminal through a network.
Of course, those skilled in the art will appreciate that the foregoing is by way of example only, and that other existing or future devices, which may be suitable for use in the present application, are also encompassed within the scope of the present application and are hereby incorporated by reference.
In the description of the present application, "a plurality" means two or more unless specifically limited otherwise.
Fig. 1 is a flowchart illustrating a method for presenting a target object according to an embodiment of the present application.
As shown in fig. 1, the method for presenting the target object includes: step S100, acquiring a scene in the user's field of view; step S200, detecting in real time whether a condition for changing the presentation state of a target object in the scene is met; and step S300, outputting a response result of the target object when it is detected that the condition is met.
By executing steps S100 to S300, a friendly augmented reality interaction mode can be implemented to enhance the user's immersive experience; the whole interaction process is simple and fast, and multiple modes can be provided for presenting the target object, thereby further enhancing the distinctive features of related products.
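To make the overall flow concrete, the following is a minimal, illustrative sketch of steps S100 to S300 as a polling loop in Python. The device object and its methods (capture_scene, presentation_condition_met, render) and the respond method of the target object are hypothetical placeholders, not part of the patent; the sketch only shows how the three steps chain together.

```python
# Minimal sketch of steps S100-S300, assuming hypothetical capture_scene(),
# presentation_condition_met() and render() helpers on the user equipment.
import time

def presentation_loop(device, target_object, poll_interval=0.05):
    while device.is_running():
        scene = device.capture_scene()                      # S100: scene in the user's field of view
        condition = device.presentation_condition_met(scene, target_object)  # S200: real-time detection
        if condition:
            result = target_object.respond(condition)       # S300: icon / expand / close / hide
            device.render(result)
        time.sleep(poll_interval)                           # keep detection close to real time
```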
It should be noted that the presentation method may be applied to an electronic device, for example a user equipment. The user equipment includes, but is not limited to, a human-computer interaction device having an image pickup device and a display device (e.g., a display screen or a projector), a head-mounted device such as smart glasses or a smart helmet, a smart phone, a personal computer, a tablet computer, a projection device, and the like, where the image pickup device and display device may be built into or externally connected to the user equipment; no limitation is imposed here. In some embodiments, the user equipment may also include detection devices (e.g., various sensors) and the like.
The presentation method of the target object will be further described below with reference to the drawings of the specification.
Specifically, in step S100, a scene in the user's field of view is acquired. It should be noted that the scene in the user's field of view is the real scene seen by the user. The scenes may include, but are not limited to, work scenes, life scenes, indoor and outdoor scenes, entertainment scenes, and the like.
Herein, a target object includes a target menu bar (or tag library). The target menu bar contains one or more pieces of mark information (also called virtual controls, or labels), so the target menu bar may also be regarded as a collection of mark information. Mark information refers to objects that can be added to the scene but are not actually present in it. Further, the mark information can be displayed superimposed on the current scene by the current user or by other users.
Further, the mark information may include at least: identification information, file information, form information, application calling information, and real-time sensing information. For example, the mark information may include identification information such as arrows, brush strokes, arbitrary graffiti on the screen, circles, and geometric shapes. As another example, the mark information may include corresponding multimedia file information, such as pictures, videos, 3D models, PDF files, office documents, and other files. As another example, the mark information may include form information, such as a form generated at the corresponding target location for a user to view or to input content. As another example, the mark information may include application calling information and related instructions for executing an application, such as opening the application or calling a specific function of the application, for instance making a phone call or opening a link. As another example, the mark information may include real-time sensing information used for connecting a sensing device (e.g., a sensor) and acquiring sensing data of the target object. In some embodiments, the mark information includes any one of: a mark identifier, such as the mark's icon or name; mark content, such as the content of a PDF file or the color and size of the mark; and a mark type, such as file information or application calling information. Of course, those skilled in the art will appreciate that the above mark information is merely exemplary, and other existing or future mark information, if applicable to the present application, is also intended to be encompassed within the scope of the present application and is hereby incorporated by reference.
As described above, since the target menu bar is a set of mark information (or virtual controls, or labels), it can serve as a container of mark information that displays multiple pieces of mark information so that a user can quickly select the desired one.
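The description above treats the target menu bar as a container of mark information of several types. The following Python data model is one possible way to represent that structure; the class and field names (MarkType, MarkInformation, TargetMenuBar, visible, expanded) are assumptions introduced only for illustration.

```python
# Illustrative data model only; the names below are assumptions beyond the patent's
# own terms "target menu bar" and "mark information".
from dataclasses import dataclass, field
from enum import Enum, auto
from typing import List

class MarkType(Enum):
    IDENTIFICATION = auto()    # arrows, brushes, graffiti, geometric shapes
    FILE = auto()              # pictures, videos, 3D models, PDF/office documents
    FORM = auto()              # forms for viewing or inputting content
    APPLICATION_CALL = auto()  # open an application, place a call, open a link
    REAL_TIME_SENSING = auto() # sensing data from a connected sensor

@dataclass
class MarkInformation:
    identifier: str            # mark identifier, e.g. an icon or name
    mark_type: MarkType
    content: object = None     # mark content, e.g. file payload, color, size

@dataclass
class TargetMenuBar:
    """The target object: a container (set) of mark information."""
    icon: str
    marks: List[MarkInformation] = field(default_factory=list)
    expanded: bool = False
    visible: bool = False
```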
In step S200, it is detected in real time whether a condition for changing the presentation state of the target object in the scene is satisfied.
Herein, the condition for changing the presentation state of the target object in the scene includes, but is not limited to, interaction instructions such as a click instruction, a touch instruction, a gesture instruction, a voice instruction, a key instruction, a head movement instruction, and an eye movement instruction. It may further involve movements of other parts of the body, such as shoulder or leg movements. In some embodiments, the condition for changing the presentation state may be an interaction instruction of a single interaction type, such as a gesture instruction; in other embodiments, it may be a combination of interaction instructions of multiple interaction types, such as a gesture instruction combined with a head movement instruction, without limitation here. Accordingly, detecting in real time whether the condition for changing the presentation state of the target object in the scene is met includes, but is not limited to, detecting such interaction instructions. For example, the gaze direction and gaze duration of the user's eyes can be detected through a sensor such as an eye tracker, a camera, or an infrared device; a specific gesture of the user can be detected through a sensor such as a camera, a depth camera, a data glove, or an optical marker; the user's voice can be captured by a voice collector such as a microphone, and a voice instruction can be determined by a speech recognition algorithm; and head movements of the user can be detected by a head pose sensor, such as a magnetic sensor, an inertial sensor, or an optical motion capture system. These sensors or devices may be built into or externally connected to the user equipment, and use their built-in software and/or hardware modules to recognize and determine the input instruction for the target object.
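As one way to picture the detection step, the sketch below registers per-type detectors and treats the condition as met when a single instruction type, or a combination of types, is satisfied for the current frame. The ConditionDetector class, the frame dictionary, and the example thresholds are assumptions for illustration, not the patent's implementation.

```python
# Hedged sketch of real-time condition detection; the combination rule
# (gesture plus head movement) is an illustrative example only.
from typing import Callable, Dict, List

class ConditionDetector:
    def __init__(self):
        # each entry maps an interaction type to a callable returning True/False per frame
        self.detectors: Dict[str, Callable[[dict], bool]] = {}

    def register(self, interaction_type: str, detector: Callable[[dict], bool]) -> None:
        self.detectors[interaction_type] = detector

    def condition_met(self, frame: dict, required: List[str]) -> bool:
        """The condition may be one instruction type or a combination of types;
        unknown types are simply ignored in this sketch."""
        return all(self.detectors[t](frame) for t in required if t in self.detectors)

# usage (illustrative): a gesture instruction combined with a head-movement instruction
detector = ConditionDetector()
detector.register("gesture", lambda f: f.get("gesture") == "finger_snap")
detector.register("head", lambda f: f.get("head_pitch_deg", 0) > 15)
met = detector.condition_met({"gesture": "finger_snap", "head_pitch_deg": 20},
                             required=["gesture", "head"])
```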
In step S300, outputting the response result of the target object includes any one of the following: displaying an icon corresponding to the target object; expanding the target object; closing the target object; and hiding the target object.
Here, the icon corresponding to the target object refers to the icon corresponding to the target menu bar. Displaying the icon corresponding to the target object means displaying that icon in the current scene: for example, the current scene does not show the target menu bar, and through a user interaction the icon corresponding to the target menu bar is displayed in the current scene. Expanding the target object means displaying the expanded target object in the current scene so that all or part of the mark information in the target object is shown: for example, the current scene shows neither the target menu bar nor its mark information, and through a user interaction the mark information corresponding to the target menu bar is displayed; or the current scene shows the icon corresponding to the target menu bar, and through a user interaction the target menu bar is expanded so that all or part of its mark information is displayed. Closing the target object means closing the expanded target object so that the mark information displayed in the current scene is no longer shown: for example, all or part of the mark information of the target menu bar is displayed in the current scene, and through a user interaction only the icon of the target menu bar remains displayed while the mark information is hidden. Hiding the target object means that neither the target object nor its mark information is displayed in the current scene: for example, the current scene displays all or part of the mark information of the target menu bar, or displays its icon, and through a user interaction neither the mark information nor the icon of the target menu bar is displayed.
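The four response results can be summarized as a small state update over the TargetMenuBar sketched earlier. The response names used below (show_icon, expand, close, hide) are assumed labels for the four cases described above.

```python
# Sketch of the four response results as a state update on the TargetMenuBar sketch above.
def output_response(menu_bar, response: str):
    if response == "show_icon":
        menu_bar.visible, menu_bar.expanded = True, False    # only the icon is displayed
    elif response == "expand":
        menu_bar.visible, menu_bar.expanded = True, True     # mark information becomes visible
    elif response == "close":
        menu_bar.expanded = False                            # icon stays, marks are hidden
    elif response == "hide":
        menu_bar.visible, menu_bar.expanded = False, False   # nothing is displayed in the scene
    return menu_bar
```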
Further, when an interactive action (for example, a specific gesture such as clicking, pinching, or frame selection, or a voice command, eye movement, head movement, touch action, key action, or the like) is performed on the displayed icon corresponding to the target menu bar, the target menu bar may be expanded. The target menu bar may be expanded in the horizontal direction of the current scene or in the vertical direction of the current scene. The display position of the icon corresponding to the target menu bar, or of the expanded target menu bar, can be preset or changed in real time. For example, a user wearing smart glasses views the current scene through the glasses, and the icon corresponding to the target menu bar, or the expanded target menu bar, is fixedly displayed at a preset position in the display screen of the smart glasses, such as at the edge of the screen, and does not change as the user moves. Alternatively, the smart glasses recognize a specific mark in the current scene and superimpose the icon corresponding to the target menu bar, or the expanded target menu bar, according to the position of that mark in the display screen; when the user moves within the scene, the position of the mark in the screen of the smart glasses changes in real time, so the superimposed position of the icon or of the expanded target menu bar in the display screen changes accordingly.
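The two display-position strategies described above (a position fixed in the smart-glasses screen versus a position that follows a recognized mark) could be expressed as a simple position helper like the sketch below; the coordinate conventions and margin value are assumptions.

```python
# Sketch of the fixed versus mark-tracking display-position strategies; assumes the mark's
# screen-space position is already available from the device's tracker.
def icon_screen_position(strategy: str, screen_size, mark_screen_pos=None, margin=24):
    width, height = screen_size
    if strategy == "fixed":
        # fixed at the screen edge, unchanged as the user moves
        return (width - margin, height // 2)
    if strategy == "track_mark" and mark_screen_pos is not None:
        # superimposed relative to the recognized mark; updates as the mark moves in the screen
        x, y = mark_screen_pos
        return (x, y - margin)
    return (width // 2, height // 2)   # fallback: screen center
```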
It should be noted that, after the icon corresponding to the target menu bar is displayed in the user's scene, the icon may be further operated, and the target menu bar may then be expanded along the horizontal or vertical direction of the current scene, i.e., the horizontal or vertical direction of the scene as viewed by the user. After the target menu bar is expanded, the mark information (or virtual controls) within it is displayed. Of course, the target menu bar may also be expanded directly along the horizontal or vertical direction of the current scene without first displaying the corresponding icon. In some embodiments, when it is detected that a condition for changing the presentation state of the target object in the scene is met, the response result of the target object is output by displaying the icon corresponding to the target object or expanding the target object, where the condition is any one of the following: a preset recognition object is recognized; the user makes a finger-snapping action; a hand is thrown into the air and then drops; the palm is opened; the palms of both hands slide outward in an arc; both hands slide outward; or a virtual handle in the scene is selected and a drawer-pulling action is performed.
Specifically, in an embodiment, when a preset recognition object is recognized, the response result of the target object is output by displaying the icon corresponding to the target object or expanding the target object. Before the preset recognition object is recognized, the following steps are performed: making the recognition object; configuring the recognition object to be associated with the target object; and arranging the recognition object in the scene.
The recognition object may include two-dimensional identification information; further, the two-dimensional identification information includes either of the following: two-dimensional code information, or 2D identification map information. For example, the two-dimensional code information is a pattern that records data symbol information using a certain geometric figure distributed over a plane (in two dimensions) according to a certain rule, such as an ArUco code, a QR code, or a Data Matrix code. The 2D identification map information includes a two-dimensional pattern used to indicate specific position information and to be easily recognized, for example a two-dimensional pattern containing a specific geometric shape, such as an image with white solid dots, rectangles, or other specific shapes on a solid black background, used as the 2D identification map; other identification patterns, such as a captured two-dimensional image, may also serve as the 2D identification map. In this embodiment, the two-dimensional identification information in the recognition object may be one or more pieces of two-dimensional code information, one or more pieces of 2D identification map information, or a combination of the two, without limitation here. Therefore, in some embodiments, the recognition object may be an identification image containing two-dimensional identification information; further, the recognition object may be a sheet of paper on which the identification image is printed. In some embodiments, the recognition object includes an identification sticker containing two-dimensional identification information; when the sticker is manufactured, an identification image containing the two-dimensional identification information may be printed to produce it, and the identification sticker may be of the magnet type or the adhesive-backed type. In addition, when the recognition object is manufactured, association information of the target object is placed in the recognition object so that the recognition object is bound to the target object. After the recognition object is made and configured, the user may place it in the current scene, for example by attaching it at some location in the scene. When the user scans and recognizes the recognition object with the user equipment (such as a head-mounted device like smart glasses or a smart helmet), the user equipment automatically displays the icon corresponding to the target object associated with the recognition object, or the expanded target object. Displaying the icon or the expanded target object (e.g., the target menu bar) associated with the recognition object further includes displaying the target object superimposed on the surface of the recognition object or positioned around the recognition object.
Specifically, for example, the icon corresponding to the target menu bar is displayed superimposed on the surface of a magnet-type identification sticker; as another example, the expanded target menu bar is displayed around an adhesive-backed identification sticker; further examples are not listed here.
Further, the recognition object may be movable and may be moved from one position to another position of the spatial scene, for example, a magnet-type recognition sticker made by printing a recognition image containing two-dimensional recognition information may be arbitrarily attached to a magnetic object by its magnetism, and thus may also be moved from one magnetic object to another magnetic object. Of course, the identification object may also be fixed, such as an adhesive-backed identification sticker, which is not easily moved after being adhered to the target location.
In some embodiments, there are one or more recognition objects. Further, multiple recognition objects may be arranged in the current scene, each associated with its own target object (e.g., its own target menu bar). The user can scan different recognition objects with the user equipment to obtain the corresponding target objects. For example, some target objects may provide various multimedia controls, such as videos, PDF files, and WORD files; some may provide various application controls, such as a text messaging application, a telephony application, or a web browsing application; and some may provide various real-time sensing controls, such as temperature or humidity information. Of course, the recognition objects and their associated target objects may be arranged differently for different scenes. For example, in a work scene, a sticker is placed on a desk; when a user scans the sticker using a smart helmet, the target menu bar associated with the sticker is expanded horizontally on the desk, and the multimedia controls in the target menu bar are displayed for the user to use in media authoring. As another example, in an indoor scene, an adhesive-backed identification sticker is adhered to a wall; when a user scans it using smart glasses, the associated target menu bar is expanded vertically on the wall, and the real-time sensing controls in it (e.g., a temperature sensor and a humidity sensor) are displayed so that the user can know the indoor temperature or humidity in time. As another example, in a life scene, a magnet-type identification sticker is magnetically attached to a refrigerator; when a user scans it with another smart device, the icon corresponding to the associated target menu bar is displayed superimposed on the refrigerator. If the user then operates the icon through a specific gesture, the target menu bar can be expanded on the refrigerator, and the various mark information in it, such as arrows, brushes, and geometric shapes, is displayed so that the user can add mark information on the refrigerator; the added mark information can serve as reference information for other users.
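One way to picture the binding between recognition objects and target objects is a small registry keyed by the identifier decoded from the two-dimensional code or identification map, as sketched below. The RecognitionRegistry class is an assumption for illustration; the actual decoding of the 2D code is left to the device and is not shown.

```python
# Hedged sketch of binding recognition objects (e.g. QR-code stickers) to target menu bars
# and resolving the bound menu bar when one is scanned.
class RecognitionRegistry:
    def __init__(self):
        self._bindings = {}   # decoded recognition-object id -> TargetMenuBar

    def bind(self, recognition_id, menu_bar):
        """Configure the recognition object to be associated with the target object."""
        self._bindings[recognition_id] = menu_bar

    def on_scan(self, recognition_id):
        """Called with the id decoded from the scanned 2D code / identification map."""
        menu_bar = self._bindings.get(recognition_id)
        if menu_bar is not None:
            menu_bar.visible = True   # display its icon or the expanded bar near the sticker
        return menu_bar
```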
Further, the recognition object may have a built-in chip or sensor for storing its spatial information in real time, such as GIS (Geographic Information System) information. Based on the GIS information, content can be bound to other target menu bars or other mark information in the same or different scenes. For example, based on the GIS information and the associated target menu bar, the mark information in the target menu bar can be interacted with to display temperature or humidity data for the current location, play news for the current location, display life-service information for the current location (e.g., restaurant, theater, or medical clinic information), and so on. Such a design enables a reasonable association between the identification sticker and the target object, which enhances the distinctive features of related products and can improve user stickiness.
Further, the related information of the recognition object may be synchronized with the cloud. In some embodiments, the target object associated with the recognition object, the icon of the target object, or the mark information in the target object may be updated synchronously based on cloud information, so as to suit different scenes. In other embodiments, the information related to the target object associated with the recognition object may be synchronized to other user equipment, so that other users can scan the recognition object through their own devices (e.g., smart glasses or other mobile devices) and view the related information of the associated target object, such as the target object's icon or the expanded target object. Moreover, synchronized management in the cloud helps keep the data of the recognition object safe and reliable.
In an embodiment, when it is detected that the condition for changing the presentation state of the target object in the scene is met because the user makes a finger-snapping action, the response result of the target object is output by displaying the icon corresponding to the target object or expanding the target object, and further by displaying the target object on a finger of the user, as shown in fig. 2. In this embodiment, when the user equipment (e.g., the image pickup device and/or the detection device) detects that the condition includes the user snapping a finger, the response result of the target menu bar may be output by displaying the icon corresponding to the target object on the user's finger, or by displaying the expanded target object on the user's finger, that is, displaying the mark information of the target object on the user's finger. Further, when the icon corresponding to the target object is displayed on the user's finger, the user may subsequently expand the target object through an interactive action (e.g., a specific gesture such as clicking, pinching, or frame selection, or a voice command, eye movement, head movement, touch action, etc.) so that the mark information in the target object is displayed. Further, when the mark information of the target object is displayed on the user's finger, the user may subsequently superimpose that mark information onto the scene through an interactive action. For example, when the mark information in the target menu bar is displayed on the user's finger, the user selects a piece of mark information by pinching, or by grasping after spreading several fingers, and moves the hand to the target position; when the user releases the hand at the target position, the adsorption point of the mark information is adsorbed at the location of the point cloud corresponding to the target position, that is, the mark information snaps onto the target position. After the adsorption succeeds, the corresponding point cloud or point-cloud plane in the scene automatically disappears, which improves the user's interactive experience. Further, after the mark information is superimposed onto the scene, the user may subsequently edit the mark information added to the scene through interactive actions, including but not limited to moving, enlarging, shrinking, deleting, copying, and rotating it. For example, the user may act on the mark information displayed in the scene through a specific gesture: after selecting the mark information by pinching, gazing, or clicking, spreading at least two fingers (e.g., three fingers) causes the mark information to be displayed enlarged, while pinching at least two fingers (e.g., three fingers) together causes it to be displayed reduced, thereby achieving the purpose of editing the mark information.
Therefore, the target object presentation method described in this embodiment changes the existing way of presenting the target object and also enhances the interaction functions of the mark information in the target object, thereby improving the distinctive features of related products.
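The "adsorption" behaviour described above (the released mark snapping to the point cloud at the target position) and the pinch-to-scale editing can be pictured with the short sketch below. The nearest-neighbour snapping rule and the scaling formula are assumptions; the patent only states that the mark is adsorbed at the point cloud of the target position and that spreading or pinching the fingers enlarges or shrinks the mark.

```python
# Hedged sketch of mark adsorption and pinch-to-scale editing; positions are assumed to be
# 3-tuples in the same scene coordinate frame, and the point cloud is assumed non-empty.
import math

def snap_to_point_cloud(release_pos, point_cloud):
    """Return the point-cloud point nearest to where the user released the mark."""
    return min(point_cloud, key=lambda p: math.dist(p, release_pos))  # Python 3.8+

def rescale_mark(mark_scale, finger_spread_delta, sensitivity=0.01):
    """Spreading the fingers (positive delta) enlarges the mark; pinching shrinks it."""
    return max(0.1, mark_scale * (1.0 + sensitivity * finger_spread_delta))
```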
In an embodiment, when it is detected that the condition for changing the presentation state of the target object in the scene is met because a hand is thrown into the air and then drops, the response result of the target object is output by displaying the icon corresponding to the target object or expanding the target object, and further by displaying the target object in the user's hand. In this embodiment, when the user equipment (e.g., the image pickup device and/or the detection device) detects that the condition includes the hand being thrown into the air and dropping again, the response result may be that the icon corresponding to the target object is displayed in the user's hand, or that the expanded target object, i.e., the mark information of the target object, is displayed in the user's hand. Further, when the icon corresponding to the target object is displayed in the user's hand, the user may subsequently expand the target object through an interactive action so that the mark information in the target object is displayed. Further, when the mark information of the target object is displayed in the user's hand, the user may subsequently superimpose it onto the scene through an interactive action. Further, after the mark information is superimposed onto the scene, the user may edit the mark information added to the scene through interactive actions, including but not limited to moving, enlarging, shrinking, deleting, copying, and rotating it. Therefore, the target object presentation method described in this embodiment changes the existing way of presenting the target object and enhances the interaction functions of the mark information in the target object, thereby improving the distinctive features of related products.
In an embodiment, when it is detected that the condition for changing the presentation state of the target object in the scene is met because the palm is opened, the response result of the target object is output by displaying the icon corresponding to the target object or expanding the target object, and further by displaying the target object on the user's hand or on the opened fingers. In this embodiment, when the user equipment (e.g., the image pickup device and/or the detection device) detects that the condition includes opening the palm, the response result may be that the icon corresponding to the target object is displayed on the user's hand or opened fingers, or that the expanded target object, i.e., the mark information of the target object, is displayed on the user's hand or opened fingers. Further, when the icon corresponding to the target object is displayed on the user's hand or opened fingers, the user may subsequently expand the target object through an interactive action so that the mark information in the target object is displayed. Further, when the mark information of the target object is displayed on the user's hand or opened fingers, the user may subsequently superimpose it onto the scene through an interactive action, and after that may edit the mark information added to the scene through interactive actions, including but not limited to moving, enlarging, shrinking, deleting, copying, and rotating it. A target object presented in this way avoids the stiff and uninteresting feel of existing menu bar presentation modes.
In an embodiment, when it is detected that the condition for changing the presentation state of the target object in the scene includes the palms sliding outward in an arc (as shown in figs. 3 and 4) or the hands sliding outward, the response result of the target object is output by displaying the icon corresponding to the target object or expanding the target object, and further the target object is expanded laterally along the horizontal direction of the scene. In this embodiment, when the user equipment (e.g., the image pickup device and/or the detection device) detects that the condition includes the palms of both hands sliding outward in an arc or both hands sliding outward, the response result is that the target object is spread out laterally along the horizontal direction of the scene, e.g., the mark information in the target object is displayed on the ground, a desktop, or any plane parallel to the ground in the current scene. Further, when the target object is expanded horizontally along the scene, the user may subsequently superimpose the mark information in the target object onto the scene through an interactive action. For example, when the mark information in the target menu bar is displayed on the user's desktop, the user selects a piece of mark information by pinching, gazing, or clicking, takes it out of the expanded target menu bar with an upward gesture, and then moves the hand to the target position, whereupon the mark information is moved and placed at that position. Further, after the mark information is superimposed onto the scene, the user may edit the mark information added to the scene through interactive actions, including but not limited to moving, enlarging, shrinking, deleting, copying, and rotating it. Therefore, the target object presentation method described in this embodiment changes the existing way of presenting the target object and enhances the interaction functions of the mark information in the target object, thereby improving the distinctive features of related products.
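A lateral expansion along the horizontal direction of the scene could be realized by laying the marks out at equal spacing on the chosen plane, as in the minimal sketch below; the coordinate frame, anchor point, and spacing are assumptions for illustration.

```python
# Sketch of laterally expanding the menu bar: marks are laid out at equal spacing on a plane
# (ground, desktop, or any plane parallel to the ground), starting at an anchor point.
def lay_out_horizontally(marks, anchor, spacing=0.15):
    """anchor: (x, y, z) in scene coordinates; returns one position per mark."""
    ax, ay, az = anchor
    return [(ax + i * spacing, ay, az) for i, _ in enumerate(marks)]
```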
In an embodiment, when it is detected that the condition for changing the presentation state of the target object in the scene is met because a virtual handle in the scene is selected and a drawer-pulling action is performed, as shown in fig. 5, the response result of the target object is output by displaying the icon corresponding to the target object or the expanded target object. In this embodiment, when the user equipment (e.g., the image pickup device and/or the detection device) detects that the condition includes selecting a virtual handle in the scene and performing a pulling action, the response result of the target object is output: the icon corresponding to the target object or the expanded target object is displayed. The virtual handle is displayed in the user's field of view: it may be displayed there initially, or displayed according to a specific gesture of the user, or displayed through other interactive actions such as a voice command, eye movement, head movement, or touch action, or displayed after a specific identifier is recognized, without limitation here. The display position of the virtual handle can be preset or changed in real time. For example, a user wearing smart glasses views the current scene through the glasses, and the virtual handle is fixedly displayed at a preset position in the display screen, such as at the screen edge, without changing as the user moves; alternatively, the smart glasses recognize a specific identifier in the current scene and superimpose the virtual handle according to the identifier's position in the display screen, and when the user moves within the scene, the identifier's position in the screen changes in real time, so the superimposed position of the virtual handle changes with it. When the virtual handle is displayed in the user's field of view, the user can select it through an interactive action and perform a drawer-pulling action, and the response result of the target object may be the display of the icon corresponding to the target object or of the expanded target object, i.e., the mark information in the target object. For example, when the user selects the virtual handle by pinching, gazing, frame selection, or clicking and performs a drawer-pulling action, the target menu bar is expanded along the vertical direction of the scene and the mark information in it is displayed. Further, when the mark information in the target object is displayed, the user may subsequently superimpose it onto the scene through an interactive action, and after that may edit the mark information added to the scene through interactive actions, including but not limited to moving, enlarging, shrinking, deleting, copying, and rotating it. Presenting the target object in this way is more engaging and enhances the user's immersive experience, thereby improving user stickiness.
In some embodiments, when it is detected that the condition for changing the presentation state of the target object in the scene is met and the condition is any one of the following, the response result of the target object is output by closing or hiding the target object: the palms of both hands slide inward in an arc; both hands slide inward and come together; or a virtual handle in the scene is selected and a drawer-pushing action is performed. In this embodiment, when the user equipment (e.g., the image pickup device and/or the detection device) detects that the met condition is one of these, the response result of the target object may be closing the target object or hiding it. When the icon corresponding to the target object or the expanded target object is displayed in the scene, and the user's palms slide inward in an arc, the hands slide inward and come together, or the virtual handle in the scene is selected and pushed like a drawer, the response result of the target object is output by closing or hiding the target object. For example, when the expanded target menu bar (i.e., the mark information in the target menu bar) is displayed in the current scene and the palms of the user's two hands slide inward in an arc, the expanded target menu bar is closed; the icon corresponding to the target menu bar is then displayed in the scene and the mark information is not. As another example, when the icon corresponding to the target menu bar or the expanded target menu bar is displayed in the current scene and the user's two hands slide inward and come together, the target menu bar is hidden, so that neither the mark information of the target menu bar nor its icon is displayed in the current scene. As another example, when the expanded target menu bar is displayed in the current scene and the user selects the virtual handle by pinching, frame selection, gazing, or clicking and performs a drawer-pushing action, the target menu bar is hidden, and neither the mark information in the target menu bar nor its icon is displayed in the scene.
In some embodiments, after the response result of the target object is output, an interactive action performed by the user on the target object is acquired; a user input is determined according to the acquired interactive action; and a response result of the target object is output based on the determined user input. Herein, the interactive action includes, but is not limited to, a click instruction, a touch instruction, a gesture instruction, a voice instruction, a key instruction, a head movement instruction, an eye movement instruction, a body movement instruction, and the like, and may be an interaction of one type or a combination of several types. A gesture may specifically be a finger action or a palm action, but is not limited thereto. Since the user equipment may have a built-in or externally connected image pickup device and/or detection device, the interactive action can be acquired through these devices, and the acquired interactive action is analyzed by a processing device built into or externally connected to the user equipment to determine the user input. Further, determining the user input from the acquired interactive action may proceed as follows: the acquired interactive action is analyzed to identify a set of points associated with the interactive action; the identified set of points is then compared with a set of points associated with a predetermined interactive action to determine the user input. Specifically, the interactive action is first acquired by the image pickup device and/or the detection device and analyzed by the processing device to identify the associated set of points; the user equipment may then compare the identified set of points with a set of points stored in a storage device and associated with a predetermined interactive action, and determine the user input according to the comparison. This allows the user's input to be determined simply and quickly, and after determining it the user equipment can quickly output the response result of the target object.
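The point-set comparison described above could look like the following sketch, where the identified point set is matched against stored point sets of predetermined interactions and the closest match within a tolerance is taken as the user input. The average nearest-point distance and the tolerance threshold are assumptions; the patent does not prescribe a particular comparison measure.

```python
# Hedged sketch of determining the user input by comparing the identified point set with
# stored point sets of predetermined interactions; points are assumed to be coordinate tuples
# and the identified set is assumed non-empty.
import math

def point_set_distance(identified, template):
    """Average distance from each identified point to its nearest template point."""
    return sum(min(math.dist(p, q) for q in template) for p in identified) / len(identified)

def determine_user_input(identified_points, predetermined_sets, tolerance=0.05):
    best_name, best_score = None, float("inf")
    for name, template in predetermined_sets.items():
        score = point_set_distance(identified_points, template)
        if score < best_score:
            best_name, best_score = name, score
    return best_name if best_score <= tolerance else None   # None: no recognized input
```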
Therefore, in the above embodiment, after the response result of the target object is output (for example, the icon corresponding to the target menu bar or the expanded target menu bar is presented), the user may continue to interact with the target object a second time, a third time, or many times; the user's input is determined from each interaction, and the corresponding response result of the target object is output.
Further, the user may operate an icon of the target object or the expanded target object, select the designated mark information from the target object, and perform an interactive action on the mark information by a specific gesture such as clicking, pinching, or frame selection, or a voice command, an eye movement action, a head movement action, a touch action, or the like, to realize a user input of the mark information, including but not limited to selecting, moving, enlarging, reducing, deleting, copying, and rotating the mark information.
According to the target object presentation method, by executing steps S100 to S300, a friendly augmented reality interaction mode can be provided to enhance the user's immersive experience; the whole interaction process is simple and fast, multiple modes can be provided for presenting the target object, and the distinctive features of related products are further improved.
In order to better implement the method, the embodiment of the application also provides a presentation device of the target object. The target object presenting device may be integrated in a user device, where the user device includes, but is not limited to, any mobile electronic product capable of performing human-computer interaction with a user, such as a smart phone, a tablet computer, a head-mounted device, and the like, and the mobile electronic product may employ any operating system, such as an Android operating system, an iOS operating system, and the like.
Fig. 6 is a schematic structural diagram of a target object presenting apparatus according to an embodiment of the present application. As shown in fig. 6, the presentation apparatus of the target object includes a scene acquisition module 1100, a condition detection module 1200, and a result output module 1300.
(1) The scene acquisition module 1100 is configured to acquire a scene in the user's field of view.
It should be noted that the scene in the user's field of view is the real scene seen by the user. The scenes may include work scenes, life scenes, indoor and outdoor scenes, entertainment scenes, and the like. Herein, a target object includes a target menu bar (or tag library). The target menu bar contains one or more pieces of mark information (also called virtual controls, or labels), so the target menu bar may also be regarded as a collection of mark information. Mark information refers to objects that can be added to the scene but are not actually present in it. Further, the mark information can be displayed superimposed on the current scene by the current user or by other users.
(2) The condition detection module 1200 is configured to detect in real time whether a condition for changing the presentation state of the target object in the scene is met.
Detecting in real time whether the condition for changing the presentation state of the target object in the scene is met includes, but is not limited to, detecting interaction instructions such as a click instruction, a touch instruction, a gesture instruction, a voice instruction, a key instruction, a head movement instruction, and an eye movement instruction. For example, the gaze direction and gaze duration of the user's eyes can be detected through a sensor such as an eye tracker, a camera, or an infrared device; a specific gesture of the user can be detected through a sensor such as a camera, a depth camera, a data glove, or an optical marker; the user's voice can be captured by a voice collector such as a microphone, and a voice instruction can be determined by a speech recognition algorithm; and head movements of the user can be detected by a head pose sensor, such as a magnetic sensor, an inertial sensor, or an optical motion capture system. These sensors or devices may be built into or externally connected to the user equipment, and use their built-in software and/or hardware modules to recognize and determine the input instruction for the target object.
(3) The result output module 1300 is configured to output a response result of the target object when it is detected that the condition for changing the presentation state of the target object in the scene is met.
Further, the result output module 1300 is further configured to display an icon corresponding to the target object, expand the target object, close the target object, or hide the target object.
Here, the icon corresponding to the target object refers to the icon of the target menu bar.

Displaying the icon corresponding to the target object means displaying that icon in the current scene. For example, when the target menu bar is not displayed in the current scene, a user interaction causes the icon of the target menu bar to be displayed in the scene.

Expanding the target object means displaying the expanded target object in the current scene, so that all or part of the marking information in the target object is shown. For example, when neither the target menu bar nor its marking information is displayed in the current scene, a user interaction causes the marking information to be displayed; or, when the icon of the target menu bar is displayed in the current scene, a user interaction expands the menu bar so that all or part of its marking information is shown.

Closing the target object means closing the expanded target object, so that the marking information currently displayed is no longer shown. For example, when all or part of the marking information of the target menu bar is displayed in the current scene, a user interaction leaves only the icon of the menu bar displayed and hides its marking information.

Hiding the target object means that neither the target object nor its marking information is displayed in the current scene. For example, when all or part of the marking information of the target menu bar, or the icon of the target menu bar, is displayed in the current scene, a user interaction removes the marking information or the icon from the scene.

Further, when an interactive action is performed on the displayed icon of the target menu bar (for example, a specific gesture such as clicking, pinching, or frame selection, or a voice command, an eye-movement action, a head-movement action, a touch action, or a key action), the target menu bar may be expanded, either along the horizontal direction or along the vertical direction of the current scene. It should be noted that, after the icon of the target menu bar is displayed in the user's scene, operating that icon may expand the menu bar along the horizontal or vertical direction of the scene as viewed by the user, after which the marking information in the menu bar is displayed. Of course, the target menu bar may also be expanded directly along the horizontal or vertical direction of the current scene without first displaying its icon.
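The behaviour just described amounts to a small state machine over the presentation states. The sketch below, which builds on the TargetMenuBar structure from the earlier sketch, is only an illustration of the display-icon / expand / close / hide transitions; the action strings are assumed names.

```python
def respond(menu: TargetMenuBar, action: str) -> TargetMenuBar:
    """Apply a response result to the target menu bar and return it."""
    if action == "show_icon":
        menu.state = PresentationState.ICON
    elif action == "expand":
        # expanding works whether or not the icon was displayed beforehand
        menu.state = PresentationState.EXPANDED
    elif action == "close":
        # closing an expanded menu bar leaves only its icon visible
        menu.state = PresentationState.ICON
    elif action == "hide":
        # hiding removes both the icon and the marking information
        menu.state = PresentationState.HIDDEN
    return menu
```

For example, calling respond(menu, "expand") followed by respond(menu, "close") leaves only the icon visible, matching the closing behaviour described above.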
Further, after the marking information is superimposed on the scene, a subsequent user may edit the marking information added to the scene through interactive actions, including but not limited to moving, enlarging, reducing, deleting, copying, or rotating it, and the response result of the marking information may be output, based on those interactive actions, through a display device built into or externally connected to the user equipment.
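A minimal sketch of such editing operations on a single piece of marking information is given below; the operation names, parameters, and default values are illustrative assumptions rather than a fixed interface.

```python
def edit_marking(info: MarkingInfo, op: str, **kwargs) -> MarkingInfo:
    """Apply a simple edit (move / enlarge / reduce / rotate) to marking info."""
    if op == "move":
        dx, dy, dz = kwargs.get("delta", (0.0, 0.0, 0.0))
        x, y, z = info.position
        info.position = (x + dx, y + dy, z + dz)
    elif op == "enlarge":
        info.scale *= kwargs.get("factor", 1.2)
    elif op == "reduce":
        info.scale /= kwargs.get("factor", 1.2)
    elif op == "rotate":
        info.rotation_deg = (info.rotation_deg + kwargs.get("angle", 90.0)) % 360.0
    return info
```

Deleting and copying are omitted here for brevity, since they act on the containing collection of marking information rather than on the single item itself.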
The scene acquisition module 1100, the condition detection module 1200, and the result output module 1300 may each be built into a corresponding device of the user equipment: for example, the scene acquisition module 1100 may be disposed in an image capture device of the user equipment, the condition detection module 1200 in a detection device (e.g., various sensors) and a processing device (e.g., a processor and a memory), and the result output module 1300 in a display device. Of course, the three modules may also be integrated in a stand-alone target object presenting apparatus that is externally connected to the user equipment.
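Purely as an illustration of this wiring, the sketch below binds the three modules to placeholder capture, detection, and display components of a user device; the class and constructor names are hypothetical.

```python
class TargetObjectPresenter:
    """Bundles the three modules and delegates to the device's own hardware."""

    def __init__(self, camera, sensor_hub, display):
        self.scene_acquisition = camera        # module 1100: image capture device
        self.condition_detection = sensor_hub  # module 1200: sensors + processor
        self.result_output = display           # module 1300: display device
```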
Through the cooperative use of the above modules or units, the target object presenting apparatus 1000 can provide a friendly augmented reality interaction mode and enhance the user's immersive experience; the whole interaction process is simple and fast, multiple modes are available for presenting the target object, and the distinctiveness of related products is further improved.
In addition, an embodiment of the present application further provides an electronic device 5000, as shown in fig. 7. The electronic device 5000 may include at least one processor 5100 and at least one memory 5200. Those skilled in the art will appreciate that the structure shown in fig. 7 does not constitute a limitation of the electronic device, which may include more or fewer components than shown, combine some components, or arrange the components differently. Wherein:
The processor 5100 is the control center of the electronic device 5000. By running or executing the software programs and/or modules stored in the memory 5200 and calling the data stored in the memory 5200, it performs the various functions of the electronic device 5000 and processes data, thereby performing overall monitoring of the electronic device 5000. Optionally, the processor 5100 may include one or more processing cores; preferably, it may integrate an application processor, which mainly handles the operating system, user interface, and application programs, and a modem processor, which mainly handles wireless communication. It will be appreciated that the modem processor may also not be integrated into the processor 5100.
The memory 5200 may be used to store software programs and modules; by executing the software programs and modules stored in the memory 5200, the processor 5100 performs various functional applications and data processing, implementing functions such as:
acquiring a scene in a user field of view;
detecting whether a condition for changing the presentation state of a target object in the scene is met in real time; and
when it is detected that the condition for changing the presentation state of the target object in the scene is satisfied, outputting a response result of the target object.
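Putting the three steps together, the loop below sketches how the processor 5100 might repeatedly acquire a frame, check whether a state-change condition is met, and output the response result. It reuses the illustrative helpers from the earlier sketches, and the camera, detector, and display objects are placeholders rather than a concrete device API.

```python
import time


def presentation_loop(camera, detector, display, menu: TargetMenuBar):
    """Continuously run: acquire scene -> detect condition -> output result."""
    while True:
        frame = camera.capture()          # 1. acquire the scene in the user's field of view
        event = detector.poll(frame)      # 2. poll input channels in (near) real time
        if event is not None:
            channel, name = event
            action = detect_condition(channel, name)
            if action is not None:
                respond(menu, action)     # 3. output the response result of the target object
                display.render(frame, menu)
        time.sleep(1 / 30)                # illustrative frame pacing (~30 Hz)
```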
It will be understood by those skilled in the art that all or part of the steps of the methods in the above embodiments may be completed by instructions, or by instructions controlling relevant hardware; the instructions may be stored in a computer-readable storage medium and loaded and executed by a processor.
To this end, an embodiment of the present application provides a computer-readable storage medium in which a computer program is stored; the computer program can be loaded by a processor to perform the steps of the target object presenting method provided in any embodiment of the present application. For example, the computer program may perform the following steps:
acquiring a scene in a user field of view;
detecting whether a condition for changing the presentation state of a target object in the scene is met in real time; and
when it is detected that the condition for changing the presentation state of the target object in the scene is satisfied, outputting a response result of the target object.
For specific implementations of the above operations, reference may be made to the foregoing embodiments, and details are not repeated here.
Since the computer program stored in the computer-readable storage medium can execute the steps of the target object presenting method provided in any embodiment of the present application, it can achieve the beneficial effects that the method can achieve; for details, see the foregoing embodiments, which are not repeated here.
The target object presenting method, apparatus, electronic device, and storage medium provided in the embodiments of the present application are described in detail above. Specific examples are used herein to explain the principles and implementations of the present application, and the description of the above embodiments is intended only to help understand the technical solutions and core ideas of the present application. Those of ordinary skill in the art will understand that the technical solutions described in the foregoing embodiments may still be modified, or some of their technical features may be equivalently replaced, and such modifications or replacements do not depart from the spirit and scope of the technical solutions of the embodiments of the present application.

Claims (20)

1. A method for presenting a target object, comprising:
acquiring a scene in a user field of view;
detecting whether a condition for changing the presentation state of a target object in the scene is met in real time; and
when it is detected that the condition for changing the presentation state of the target object in the scene is satisfied, outputting a response result of the target object.
2. The method of claim 1, wherein outputting the response result of the target object comprises any one of:
displaying an icon corresponding to the target object;
expanding the target object;
closing the target object;
hiding the target object.
3. The method for presenting a target object according to claim 2, wherein when the outputting of the response result of the target object includes displaying an icon corresponding to the target object, the target object is expanded by operating the displayed icon of the target object.
4. The method for presenting a target object according to claim 2 or 3, wherein expanding the target object comprises expanding the target object horizontally or expanding the target object vertically.
5. The method for presenting a target object according to claim 2, wherein, when it is detected that the condition for changing the presentation state of the target object in the scene is satisfied, the response result of the target object is output as displaying an icon corresponding to the target object or expanding the target object, and wherein the condition comprises any one of the following:
recognizing a preset recognition object;
making a finger-snapping action;
tossing a hand into the air and then letting it fall;
opening a palm;
sliding both palms outward in an arc;
sliding both hands outward; and
selecting a virtual handle in the scene and making a drawer-pulling action.
6. The method for presenting a target object according to claim 5, wherein before the step of detecting in real time whether a condition for changing the presentation state of the target object in the scene is satisfied, the method comprises:
making a recognition object;
configuring the recognition object to be associated with the target object; and
arranging the recognition object in the scene.
7. The method of claim 5, wherein the recognition object is movable or fixed.
8. The method of claim 5, wherein there are one or more recognition objects.
9. The method of claim 5, wherein the recognition object comprises a magnet-based recognition sticker or an adhesive-based recognition sticker.
10. The method for presenting a target object of claim 5, wherein the recognition object is provided with a built-in chip or sensor for storing spatial information of the recognition object in real time.
11. The method for presenting a target object according to any one of claims 5 to 10, wherein outputting the response result of the target object as displaying an icon corresponding to the target object or expanding the target object further comprises displaying the target object superimposed on a surface of the recognition object or around the recognition object.
12. The method for presenting a target object according to claim 5, wherein, when the detected condition for changing the presentation state of the target object in the scene comprises the user making a finger-snapping action, outputting the response result of the target object as displaying an icon corresponding to the target object or expanding the target object further comprises displaying the target object on a finger of the user.
13. The method for presenting a target object according to claim 5, wherein, when the detected condition for changing the presentation state of the target object in the scene comprises a hand being tossed into the air and then falling, outputting the response result of the target object as displaying an icon corresponding to the target object or expanding the target object further comprises displaying the target object in the user's hand.
14. The method for presenting a target object according to claim 5, wherein, when the detected condition for changing the presentation state of the target object in the scene comprises opening a palm, outputting the response result of the target object as displaying an icon corresponding to the target object or expanding the target object further comprises displaying the target object on the user's hand or on the opened fingers.
15. The method for presenting a target object according to claim 5, wherein, when the detected condition for changing the presentation state of the target object in the scene comprises sliding both palms outward in an arc or sliding both hands outward, outputting the response result of the target object as displaying an icon corresponding to the target object or expanding the target object further comprises expanding the target object along the horizontal direction of the scene.
16. The method for presenting a target object according to claim 2, wherein, when it is detected that the condition for changing the presentation state of the target object in the scene is satisfied, the response result of the target object is output as closing the target object or hiding the target object, and wherein the condition comprises any one of the following:
sliding both palms inward in an arc;
sliding both hands inward to close together; and
selecting a virtual handle in the scene and making a drawer-pushing action.
17. The method for presenting a target object according to claim 1 or 2, further comprising: after the response result of the target object is output, acquiring an interactive action made by a user on the target object; determining a user input according to the acquired interactive action; and outputting a response result of the target object based on the determined user input.
18. An apparatus for rendering a target object, comprising:
the scene acquisition module is used for acquiring a scene in a user field of view;
the condition detection module is used for detecting whether a condition for changing the presentation state of the target object in the scene is met in real time; and
the result output module is used for outputting a response result of the target object when detecting that the condition for changing the presentation state of the target object in the scene is met.
19. An electronic device, comprising a memory in which a computer program is stored, and a processor that executes the method for presenting a target object according to any one of claims 1 to 17 by calling the computer program stored in the memory.
20. A computer-readable storage medium, characterized in that it stores a computer program adapted to be loaded by a processor to perform the method for presenting a target object according to any one of claims 1 to 17.
CN202111626260.6A 2021-12-28 2021-12-28 Target object presenting method and device, electronic equipment and storage medium Pending CN114296551A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202111626260.6A CN114296551A (en) 2021-12-28 2021-12-28 Target object presenting method and device, electronic equipment and storage medium

Publications (1)

Publication Number Publication Date
CN114296551A true CN114296551A (en) 2022-04-08

Family

ID=80970738

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202111626260.6A Pending CN114296551A (en) 2021-12-28 2021-12-28 Target object presenting method and device, electronic equipment and storage medium

Country Status (1)

Country Link
CN (1) CN114296551A (en)

Citations (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN105915418A (en) * 2016-05-23 2016-08-31 珠海格力电器股份有限公司 Method and device for controlling household appliance
CN113672158A (en) * 2021-08-20 2021-11-19 上海电气集团股份有限公司 Human-computer interaction method and device for augmented reality

Similar Documents

Publication Publication Date Title
CN109952610B (en) Selective identification and ordering of image modifiers
KR102508924B1 (en) Selection of an object in an augmented or virtual reality environment
CN105229566B (en) Indicating observations or visual patterns in augmented reality systems
KR20210046591A (en) Augmented reality data presentation method, device, electronic device and storage medium
JP7139436B2 (en) Object creation using physical manipulation
CN105981076B (en) Synthesize the construction of augmented reality environment
KR101784328B1 (en) Augmented reality surface displaying
US9437038B1 (en) Simulating three-dimensional views using depth relationships among planes of content
JP5942456B2 (en) Image processing apparatus, image processing method, and program
US11217020B2 (en) 3D cutout image modification
JP5807686B2 (en) Image processing apparatus, image processing method, and program
US20140129935A1 (en) Method and Apparatus for Developing and Playing Natural User Interface Applications
US20150091903A1 (en) Simulating three-dimensional views using planes of content
CN106355153A (en) Virtual object display method, device and system based on augmented reality
JP2013025789A (en) System, method and program for generating interactive hot spot of gesture base in real world environment
CN113449696B (en) Attitude estimation method and device, computer equipment and storage medium
JP2022153514A (en) Browser for mixed reality systems
JP2014531693A (en) Motion-controlled list scrolling
US11893696B2 (en) Methods, systems, and computer readable media for extended reality user interface
CN108027655A (en) Information processing system, information processing equipment, control method and program
Hu et al. Dt-dt: top-down human activity analysis for interactive surface applications
US20220253808A1 (en) Virtual environment
US10402068B1 (en) Film strip interface for interactive content
US20210216349A1 (en) Machine interaction
Roccetti et al. Day and night at the museum: intangible computer interfaces for public exhibitions

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
CB02 Change of applicant information

Address after: 201210 7th Floor, No. 1, Lane 5005, Shenjiang Road, China (Shanghai) Pilot Free Trade Zone, Pudong New Area, Shanghai

Applicant after: HISCENE INFORMATION TECHNOLOGY Co.,Ltd.

Address before: Room 501 / 503-505, 570 shengxia Road, China (Shanghai) pilot Free Trade Zone, Pudong New Area, Shanghai, 201203

Applicant before: HISCENE INFORMATION TECHNOLOGY Co.,Ltd.
