CN117834843A - Recording method, recording device, electronic apparatus, recording medium, and computer program product - Google Patents
Info
- Publication number
- CN117834843A CN117834843A CN202211194299.XA CN202211194299A CN117834843A CN 117834843 A CN117834843 A CN 117834843A CN 202211194299 A CN202211194299 A CN 202211194299A CN 117834843 A CN117834843 A CN 117834843A
- Authority
- CN
- China
- Prior art keywords
- notepad
- content information
- augmented reality
- user
- acquiring
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Pending
Classifications
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N13/00—Stereoscopic video systems; Multi-view video systems; Details thereof
- H04N13/30—Image reproducers
- H04N13/366—Image reproducers using viewer tracking
- H04N13/373—Image reproducers using viewer tracking for tracking forward-backward translational head movements, i.e. longitudinal movements
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T11/00—2D [Two Dimensional] image generation
- G06T11/60—Editing figures and text; Combining figures or text
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N13/00—Stereoscopic video systems; Multi-view video systems; Details thereof
- H04N13/30—Image reproducers
- H04N13/332—Displays for viewing with the aid of special glasses or head-mounted displays [HMD]
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N13/00—Stereoscopic video systems; Multi-view video systems; Details thereof
- H04N13/30—Image reproducers
- H04N13/366—Image reproducers using viewer tracking
- H04N13/376—Image reproducers using viewer tracking for tracking left-right translational head movements, i.e. lateral movements
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N13/00—Stereoscopic video systems; Multi-view video systems; Details thereof
- H04N13/30—Image reproducers
- H04N13/366—Image reproducers using viewer tracking
- H04N13/383—Image reproducers using viewer tracking for tracking with gaze detection, i.e. detecting the lines of sight of the viewer's eyes
Abstract
The present disclosure relates to a recording method, apparatus, electronic device, storage medium and computer program product, the method comprising: constructing an environment map in the augmented reality space; acquiring notepad content information and storing the notepad content information to a notepad; and fixing the notepad in the environment map. With the technical scheme provided by the embodiments of the present disclosure, a user can fix the notepad at a suitable and conspicuous position in the augmented reality space as needed, so that the user is reminded in a targeted and intelligent way, filling a gap in augmented reality devices in this respect.
Description
Technical Field
The present disclosure relates to the field of augmented reality technology, and in particular, to a recording method, apparatus, electronic device, storage medium, and computer program product.
Background
See-Through refers to the ability of a user wearing an augmented reality device to directly or indirectly observe the real environment through the device; it is also commonly called the "see-through function". See-Through not only lets the user know their position relative to the boundary of the augmented reality device without taking it off, but also makes it easy to return to the center origin and to perceive the real environment (for example, to find a mobile phone or sign for a delivery). This increases the sustainability of the augmented reality experience.
While using the See-Through function of an augmented reality device, a user often has many things that need to be recorded and handled. However, a busy user easily forgets some of them, so important matters are missed, causing unnecessary trouble at work or in daily life. In that case, the user has to resort to a note application on a flat-screen electronic device such as a mobile phone. Such a note reminds the user only when that device is viewed again and therefore cannot remind the user intelligently, and current augmented reality devices lack this capability.
Disclosure of Invention
To solve or at least partially solve the above technical problems, the present disclosure provides a recording method, apparatus, electronic device, storage medium, and computer program product.
In a first aspect, the present disclosure provides a recording method, comprising:
constructing an environment map in the augmented reality space;
acquiring notepad content information and storing the notepad content information to a notepad;
and fixing the notepad in the environment map.
In a second aspect, the present disclosure also provides a notepad device comprising:
the map construction module is used for constructing an environment map in the augmented reality space;
the acquisition module is used for acquiring the notepad content information and storing the notepad content information to a notepad;
and the fixing module is used for fixing the notepad in the environment map.
In a third aspect, the present disclosure also provides an electronic device, including:
one or more processors;
a storage means for storing one or more programs;
the one or more programs, when executed by the one or more processors, cause the one or more processors to implement the recording method as described above.
In a fourth aspect, the present disclosure also provides a computer-readable storage medium having stored thereon a computer program which, when executed by a processor, implements a note taking method as described above.
In a fifth aspect, the present disclosure also provides a computer program product comprising a computer program or instructions which, when executed by a processor, implements a note taking method as described above.
Compared with the prior art, the technical scheme provided by the embodiment of the disclosure has the following advantages:
The technical scheme provided by the embodiments of the present disclosure constructs an environment map in the augmented reality space, acquires notepad content information and stores it to a notepad, and fixes the notepad in the environment map. Because the notepad in this disclosure is similar to a sticky note in the real world and can be fixed at any position in the environment map, the user can fix the notepad at a suitable and conspicuous position in the augmented reality space as needed, so that the user is reminded in a targeted and intelligent way, filling a gap in augmented reality devices in this respect.
Drawings
The accompanying drawings, which are incorporated in and constitute a part of this specification, illustrate embodiments consistent with the disclosure and together with the description, serve to explain the principles of the disclosure.
In order to more clearly illustrate the embodiments of the present disclosure or the solutions in the prior art, the drawings that are required for the description of the embodiments or the prior art will be briefly described below, and it will be obvious to those skilled in the art that other drawings can be obtained from these drawings without inventive effort.
Fig. 1 is a block diagram of an augmented reality device according to an embodiment of the present disclosure;
fig. 2 is a schematic diagram of an application scenario of an augmented reality device according to an embodiment of the present disclosure;
fig. 3 is a schematic diagram of another augmented reality device provided by an embodiment of the present disclosure;
FIG. 4 is a flow chart of a method of recording events provided in an embodiment of the present disclosure;
FIG. 5 is a schematic diagram of a note device according to an embodiment of the disclosure;
fig. 6 is a schematic structural diagram of an electronic device in an embodiment of the disclosure.
Detailed Description
In order that the above objects, features and advantages of the present disclosure may be more clearly understood, a further description of aspects of the present disclosure will be provided below. It should be noted that, without conflict, the embodiments of the present disclosure and features in the embodiments may be combined with each other.
In the following description, numerous specific details are set forth in order to provide a thorough understanding of the present disclosure, but the present disclosure may be practiced otherwise than as described herein; it will be apparent that the embodiments in the specification are only some, but not all, embodiments of the disclosure.
Extended Reality (XR) refers to a human-computer-interactive virtual environment created by a computer that combines the real with the virtual; it is also a collective term for technologies such as AR, VR and MR. By integrating the visual interaction of the three, it brings the experiencer an "immersion" with seamless transitions between the virtual world and the real world.
The augmented reality device is a terminal capable of realizing an augmented reality effect, and is generally provided in the form of glasses, a head-mounted display (Head-Mounted Display, HMD), or contact lenses for realizing visual perception and other forms of perception; however, the form of the augmented reality device is not limited to these and can be further miniaturized or enlarged as needed.
The augmented reality device may create a virtual scene. A virtual scene is a virtual scene that an application program displays (or provides) when running on an electronic device. The virtual scene may be a simulation environment for the real world, a semi-simulation and semi-fictional virtual scene, or a pure fictional virtual scene. The virtual scene may be any one of a two-dimensional virtual scene, a 2.5-dimensional virtual scene or a three-dimensional virtual scene, and the dimension of the virtual scene is not limited in the embodiment of the present application.
Fig. 1 is a block diagram of an augmented reality device according to an embodiment of the present disclosure. Referring to fig. 1, the main functional modules of an exemplary augmented reality device may include, but are not limited to, the following:
- Detection module: detects the user's operation commands with various sensors and applies them to the virtual environment, e.g. continuously updating the image shown on the display screen to follow the user's line of sight, thereby realizing interaction between the user and the virtual scene, for example continuously updating the display content based on the detected rotation of the user's head;
- Feedback module: receives data from the sensors and provides real-time feedback to the user;
- Sensors: on the one hand accept operation commands from the user and apply them to the virtual scene; on the other hand return the results produced by an operation to the user in various feedback forms;
- Control module: controls the sensors and the various input/output devices, including obtaining user data (e.g. motion, speech) and outputting sensory data such as images, vibration, temperature and sound that act on the user, the virtual environment and the real world;
- Modeling module: constructs a three-dimensional model of the virtual environment, which may also include feedback mechanisms such as sound and touch.
In an augmented reality scene, user selection of an object (e.g., virtual object, virtual control, etc.) or region in a virtual scene may be accomplished by a controller, which may be a handle, that the user selects the object or region by operation of a key of the handle. Of course, in other embodiments, the selection of objects or regions in the augmented reality device may be performed using gestures or voice instead of using the controller.
Fig. 2 is a schematic diagram of an application scenario of an augmented reality device according to an embodiment of the present disclosure. In fig. 2, the augmented reality device is a head-mounted display. Referring to fig. 2, a sensor (e.g. a nine-axis sensor) for detecting posture changes is provided in the augmented reality device. When a user wears the device and the posture of the user's head changes, the real-time head pose is transmitted to the processor, which calculates the gaze point of the user's line of sight in the virtual environment, computes from the gaze point the image within the user's gaze range (i.e. the virtual field of view) in the three-dimensional model of the virtual environment, and displays it on the display screen, giving the user an immersive experience as if in the real environment.
Fig. 3 is a schematic diagram of another augmented reality device according to an embodiment of the present disclosure. Referring to fig. 3, the augmented reality device 1 integrates several image sensors 11 (e.g. a depth camera, RGB cameras, etc.). Here, the purpose of the image sensors is not limited to providing an image to be displayed. The images acquired by the image sensors 11 and the measurement data acquired by an inertial measurement unit (IMU) integrated in the augmented reality device 1 can be converted, by computer vision analysis methods, into data that helps the augmented reality device understand the environment. Moreover, the augmented reality device is designed to support not only passive but also active computer vision analysis methods. Passive computer vision analysis methods capture image information from the environment; they may be monoscopic (images from a single image sensor) or stereoscopic (images from two or more image sensors), and may include, but are not limited to, feature tracking, object recognition, and depth estimation. Active computer vision analysis methods add information to the environment by projecting a pattern that is visible to the image sensor but not necessarily to the human visual system; such techniques include time-of-flight (ToF) cameras, laser scanning, or structured light to simplify the stereo-matching problem, and are used to realize scene depth reconstruction. An infrared (IR) projector can project a random IR speckle pattern onto the environment, adding texture information to make stereo matching easier where ambiguities exist (e.g. uniform textures or surfaces). A ToF camera may also be included in some embodiments. For low-light or no-light conditions, active computer vision analysis methods with IR floodlights are used to support tracking.
Computer vision analysis methods use data from the sensors to automatically track the head position, the user's body, and the environment. In practice, computer vision analysis and graphics rendering may be done primarily on an external computer, or handled by an integrated GPU, but at a minimum the augmented reality device must perform camera image signal processing (ISP) functions such as synchronization, combining, Bayer decoding, and correction of image distortion for display, as well as MR synthesis of rendered graphics and camera images. The augmented reality device is designed to include the components necessary to apply passive or active stereoscopic analysis methods for position tracking, user body tracking, and environment tracking. It may also be compatible with third-party external transmitters that add visual information to the environment; for example, any projection of a texture pattern onto the environment may aid stereo matching. Practical tracking algorithms typically involve stereo matching, IMU data integration, feature detection/tracking, object recognition, and surface fitting.
The See-Through function of the augmented reality device is specifically the function of directly or indirectly observing things in the user's real environment through the augmented reality device while the user wears it, and is also generally referred to as the "perspective function". The See-Through function typically comes in two types: optical see-through and video see-through. Optical see-through means that the user can directly view the real environment through optical elements (e.g. holographic waveguides and other systems on which graphics can be superimposed on the real world). Video see-through means that environmental images of the real environment are captured by one or more image sensors installed on the augmented reality device; the captured images are then processed by computer vision analysis to obtain a left-eye image and a right-eye image, the left-eye image is shown on the left-eye display screen of the device and the right-eye image on the right-eye display screen, and the user perceives the real environment by watching the images displayed on the two screens.
As noted in the Background, while using the See-Through function of an augmented reality device a user often has many things that need to be recorded and handled, but a busy user easily forgets some of them, so important matters are missed, causing unnecessary trouble at work or in daily life. In that case, the user has to resort to a note application on a flat-screen electronic device such as a mobile phone; such a note reminds the user only when that device is viewed again, cannot remind the user intelligently, and current augmented reality devices lack this capability.
In view of this, fig. 4 is a flowchart of a recording method provided in an embodiment of the present disclosure. The embodiment is applicable to recording information while wearing an augmented reality device, and the method may be performed by the augmented reality device. Such devices include, but are not limited to, virtual reality devices, augmented reality devices, mixed reality devices, augmented-virtuality devices, and the like.
In this application, a user refers to the wearer of an augmented reality device.
As shown in fig. 4, the method specifically may include:
S110, constructing an environment map in the augmented reality space.
Optionally, the environment map is constructed in the augmented reality space using simultaneous localization and mapping (SLAM) techniques.
The environment map constructed in this step may be a virtual map or a real map.
A virtual map refers to a map that is not related to the real world. Virtual maps are often created according to the user's needs. For example, when a user plays a room-decorating game with an augmented reality device, the room the user is in may appear as an empty room with no objects in a certain scene; in the real world, however, the user is playing in the living room at home, where there are sofas, coffee tables, and so on.
A real map is a map that reflects the real world: the objects in the real map correspond one-to-one with the objects in the user's real-world environment.
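As a rough illustration of the distinction, an environment map can be thought of as a set of named anchor positions in a fixed frame. The sketch below is a hypothetical, much-simplified stand-in for a SLAM-built map; the class and field names are invented for illustration and do not come from the disclosure.

```python
from dataclasses import dataclass, field

@dataclass
class EnvironmentMap:
    """Toy stand-in for a SLAM-built map: named anchors in a fixed world frame."""
    is_real: bool                                 # True: real map, False: virtual map
    anchors: dict = field(default_factory=dict)   # name -> (x, y, z) in the map frame

    def add_anchor(self, name, position):
        # Each anchor corresponds one-to-one with an object (for a real map).
        self.anchors[name] = position

# Build a toy "real map" of a living room.
living_room = EnvironmentMap(is_real=True)
living_room.add_anchor("sofa", (1.0, 0.0, 2.5))
living_room.add_anchor("microwave", (-0.5, 1.2, 3.0))
```

A virtual map would simply be built with `is_real=False` and anchors that need not correspond to any real-world object.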
S120, acquiring note content information and storing the note content information to a note book.
Notepad content information refers to the information that the user wishes to record. The present application does not limit the type of notepad content information. Illustratively, the notepad content information includes at least one of: text, pictures, audio, video, 3D objects or functional components.
Here, a 3D object refers to a virtual three-dimensional object. A functional component is a simple package of data and methods that has a specific function; illustratively, the functional components include an alarm-clock component, a calendar component, or a map component.
Accordingly, a variety of input modes may be used to enter notepad content information, and the present application does not limit them. Exemplary input modes include, but are not limited to: typing text; calling a microphone to collect voice as audio and inputting it in audio form; calling a microphone to collect voice, converting it into text, and inputting it in text form; calling a camera to shoot a video and inputting it in video form; downloading a file from a network; selecting a local file; and calling an application program component. Here, files include, but are not limited to, text, pictures, audio, and video.
Based on this, optionally, the implementation of "acquiring the notepad content information" includes at least one of the following: acquiring first text information and taking it as the notepad content information; acquiring first audio information collected by an audio acquisition device and taking it as the notepad content information; acquiring second audio information collected by the audio acquisition device, converting it into second text information, and taking the second text information as the notepad content information; acquiring image information collected by an image acquisition device and taking it as the notepad content information; acquiring a file downloaded from the Internet and taking it as the notepad content information; acquiring a local file and taking it as the notepad content information; and acquiring an application program component and taking it as the notepad content information. The audio acquisition device and the image acquisition device are integrated in the augmented reality device.
Notepads are containers for holding and displaying notepad content information, which may in practice be provided in the form of a notebook, in the form of a sticky note, in the form of a pop-up window, or in other forms.
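One possible way to model such a container, assuming the content types listed above. The `Notepad` class and its method names are hypothetical illustrations, not part of the disclosure.

```python
from dataclasses import dataclass, field
from typing import List, Tuple

# Content types named in the text; "component" stands for a functional
# component such as an alarm-clock or calendar widget.
ALLOWED_TYPES = {"text", "picture", "audio", "video", "3d_object", "component"}

@dataclass
class Notepad:
    """Container that holds and displays note content, like a sticky note."""
    items: List[Tuple[str, object]] = field(default_factory=list)

    def save(self, kind, payload):
        # Reject content types the notepad does not support.
        if kind not in ALLOWED_TYPES:
            raise ValueError(f"unsupported content type: {kind}")
        self.items.append((kind, payload))

pad = Notepad()
pad.save("text", "Microwave: 700 W, defrost on level 3")
pad.save("audio", b"...raw pcm bytes...")
```

Regardless of the input mode used (typing, voice, camera, file, component), the acquired content would end up stored in such a container before the notepad is fixed in the map.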
S130, fixing the notepad in the environment map.
The notepad is fixed in the environment map, namely the notepad is bound with the environment map, and the position of the notepad in the environment map is not changed along with the movement of a user in the augmented reality space. In practice, notepads may be fixed in any location in the environment map, as this application is not limited in this regard.
There are various ways to implement this step, and the present application does not limit them. Illustratively, a coordinate system is built in the augmented reality space in advance and bound with the environment map; this step is then implemented by determining the coordinate value of the notepad in that coordinate system, the coordinate value being a fixed value.
That the coordinate system is bound with the environment map means that the coordinate value of any position in the environment map under the coordinate system is fixed and does not change as the user moves in the augmented reality space, while the coordinate values of the user in the coordinate system do change as the user moves. Determining the coordinate value of the notepad in the coordinate system, i.e. giving the notepad a fixed coordinate value, therefore achieves the goal that the position of the notepad in the environment map does not change as the user moves in the augmented reality space.
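The effect of a fixed coordinate value can be sketched in a few lines: the notepad's world coordinate never changes, and only the user's offset to it varies as the user moves. All numbers here are made up for illustration.

```python
# Fixed coordinate value of the notepad in the map-bound world frame.
notepad_world = (2.0, 1.5, -3.0)

def notepad_relative_to_user(user_position):
    """Where the notepad appears from the user's current position.

    The world coordinate never changes; only the user's offset to it does,
    which is exactly the "fixed in the environment map" behaviour.
    """
    return tuple(n - u for n, u in zip(notepad_world, user_position))

before = notepad_relative_to_user((0.0, 0.0, 0.0))
after = notepad_relative_to_user((1.0, 0.0, -1.0))  # user walks to a new spot
```

The relative position changes with the user's movement, but `notepad_world` stays constant, so the notepad remains anchored at the same place in the environment map.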
In an environment map, a real object may be present at the location of the notepad in the environment map; virtual objects may also be present; or there is neither a real object nor a virtual object.
Illustratively, a user has a microwave oven at home. While wearing the augmented reality device and using the See-Through function, the user can save the microwave oven's instructions as notepad content information to the notepad and fix the notepad to the microwave oven or a location near it. Thus, whenever the user later uses the microwave oven, the instructions (i.e. the notepad content information) can be viewed right next to it.
For another example, while wearing the augmented reality device and using the See-Through function, a user may be building a 3-dimensional model from virtual material in an office area. If, with the model half built, the user has to deal with an emergency immediately, the user can save the half-built 3-dimensional model (here, a 3D object) as notepad content information to the notepad and fix the notepad to the office area. The next time the user returns to the office, the half-built model (i.e. the notepad content information) can be called up and the building work continued.
According to the above technical scheme, an environment map is constructed in the augmented reality space; notepad content information is acquired and stored to a notepad; and the notepad is fixed in the environment map. Because the notepad in this application is similar to a sticky note in the real world and can be fixed at any position in the environment map, the user can fix the notepad at a suitable and conspicuous position in the augmented reality space as needed, so that the user is reminded in a targeted and intelligent way, filling a gap in augmented reality devices in this respect.
It should be emphasized that the above-described scheme is applicable both when the See-Through function is used and when it is not. When the See-Through function is used, the constructed environment map is a real map; when it is not used, the constructed environment map is a virtual map.
In addition, when the See-Through function is used and the environment map is constructed with visual simultaneous localization and mapping (visual SLAM), an environment image must be collected; whether in video see-through or optical see-through, this image is the basis for constructing the environment map. In the case of video see-through, the method further comprises: performing de-distortion and stitching on the environment image to obtain a second image, and displaying the second image on the display screen of the augmented reality device. The purpose is that the user perceives the real environment by viewing the second image, i.e. "seeing through" is realized directly by means of the second image. In video see-through the acquired environment image thus serves two purposes, constructing the environment map and realizing the see-through; in optical see-through, the acquired environment image is used only to construct the environment map, not to realize the see-through.
In some embodiments, the method further comprises: in response to a position-adjustment operation on the notepad, adjusting the position of the notepad in the environment map. Optionally, the user may adjust the position of the notepad through gestures, voice commands, or a controller. In general, notepad content information tends to be thematic, and these themes tend to be associated with locations. The purpose of this arrangement is to allow the user to adjust the position of the notepad in the environment map, i.e. to change the coordinate value of the notepad in the coordinate system, so that the notepad is located at the position associated with the content it holds and the user is reminded in a convenient and targeted manner.
Illustratively, when a user wears the augmented reality device and uses the See-Through function, the user can store the usage instructions of a microwave oven as notepad content information while in the living room, save them to a notepad, and fix the notepad in the living room. In practice, however, the need to view this notepad content information arises mostly around the microwave oven. The user can therefore move the notepad to the microwave oven in the kitchen by gesture, voice command, or controller, so as to consult the usage instructions when later using the microwave oven.
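A minimal sketch of the position-adjustment operation described above; the `Notepad` type, coordinate values, and room positions are hypothetical stand-ins, not names from the patent.

```python
from dataclasses import dataclass, field
from typing import List, Tuple

@dataclass
class Notepad:
    content: List[str] = field(default_factory=list)
    position: Tuple[float, float, float] = (0.0, 0.0, 0.0)  # fixed coords in the map frame

def adjust_position(notepad: Notepad, new_position: Tuple[float, float, float]) -> None:
    """Handle a position-adjustment operation (gesture, voice command,
    or controller): re-anchor the notepad at new map coordinates."""
    notepad.position = new_position

# Move the microwave-manual notepad from the living room to the kitchen.
manual = Notepad(content=["microwave usage instructions"], position=(1.0, 0.5, 2.0))
adjust_position(manual, (4.2, 0.9, -1.3))
```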
In other embodiments, optionally, the method further comprises: determining key points of objects in the augmented reality space; and if the distance between the notepad and a key point is smaller than or equal to a first set distance, moving the notepad to that key point. The purpose of this arrangement is to give objects the ability to hold a notepad at their key points. With the adsorption (snap) function enabled, when the distance between the notepad and a key point is smaller than or equal to the first set distance, the notepad is snapped onto the key point so that its coordinate values coincide with those of the key point. This reduces the work of adjusting the notepad's position in the environment map and keeps the augmented reality space tidier.
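The snap rule above can be expressed in a few lines. The threshold value is illustrative; the patent only specifies "a first set distance", not its magnitude.

```python
import math
from typing import Iterable, Tuple

FIRST_SET_DISTANCE = 0.15  # metres; illustrative value only

def maybe_snap(notepad_pos: Tuple[float, ...],
               keypoints: Iterable[Tuple[float, ...]],
               threshold: float = FIRST_SET_DISTANCE) -> Tuple[float, ...]:
    """If the notepad lies within the first set distance of a key point,
    snap it there so the coordinates coincide; otherwise leave it as-is."""
    for kp in keypoints:
        if math.dist(notepad_pos, kp) <= threshold:
            return tuple(kp)  # adsorbed onto the key point
    return tuple(notepad_pos)
```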
There are various methods for determining the key points of the object in the augmented reality space, and the present application is not limited thereto. Illustratively, determining keypoints for an object in the augmented reality space comprises: a pre-specified keypoint in the object is identified.
Alternatively, the environment map is a point cloud map, and determining key points of an object in the augmented reality space comprises: determining, based on the point cloud map, point cloud points that describe the shape of the object; and taking those point cloud points as the key points of the object.
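One way to pick "shape-describing" points from an object's cloud is sketched below. The patent leaves the selection method open, so the heuristic here (points extreme along each coordinate axis) is an assumption made purely for illustration.

```python
import numpy as np

def shape_keypoints(object_points: np.ndarray) -> np.ndarray:
    """Pick point cloud points that describe the object's shape.
    Stand-in heuristic: the points extreme along each axis, which
    roughly outline the object's extent."""
    idx = set()
    for axis in range(object_points.shape[1]):
        idx.add(int(object_points[:, axis].argmin()))
        idx.add(int(object_points[:, axis].argmax()))
    return object_points[sorted(idx)]
```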
On the basis of the above technical solutions, optionally, the display state of the notepad comprises an unfolded state and a hidden state; the notepad is associated with an unfolding trigger condition and a hiding trigger condition; the method further comprises: when the unfolding trigger condition is met, setting the display state of the notepad to the unfolded state; and when the hiding trigger condition is met, setting the display state of the notepad to the hidden state. The purpose of this arrangement is that the notepad is in the unfolded state when the user needs to be reminded and in the hidden state when no reminder is needed, making the reminding more intelligent and the augmented reality space tidier.
The specific content of the unfolding trigger condition and the hiding trigger condition is not limited; in practice they can be set according to the notepad content information. For example, suppose the user has booked an online medical consultation for May 2, 13:30-14:00. Before the consultation, the user lists several matters to discuss with the doctor and stores them as notepad content information in a notepad, sets the unfolding trigger condition to "the current moment is within the online consultation period (May 2, 13:30-14:00)", and sets the hiding trigger condition to "the current moment is outside that period". The notepad is thus hidden outside the online consultation period and unfolded during it.
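The time-window trigger in this example reduces to a simple comparison. A sketch under stated assumptions (the year is hypothetical, since the example gives only "May 2, 13:30-14:00"):

```python
from datetime import datetime

def display_state(now: datetime, start: datetime, end: datetime) -> str:
    """Unfolding trigger: current moment inside the consultation window.
    Hiding trigger: current moment outside it."""
    return "unfolded" if start <= now <= end else "hidden"

# Hypothetical year chosen for illustration.
start = datetime(2024, 5, 2, 13, 30)
end = datetime(2024, 5, 2, 14, 0)
```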
On the basis of the above technical solutions, optionally, the notepad content information includes audio, the audio is associated with an automatic play trigger condition, and the method further includes: when the automatic play trigger condition is met, automatically playing the audio; and/or, the notepad content information includes a video, the video is associated with an automatic play trigger condition, and the method further includes: when the automatic play trigger condition is met, automatically playing the video.
Illustratively, a user has a microwave oven at home. While wearing the augmented reality device and using the See-Through function, the user can start recording and, referring to the manual supplied with the microwave oven, record its usage instructions as audio. The audio is stored in a notepad as notepad content information, and the notepad is fixed on the microwave oven. The user sets the audio's automatic play trigger condition to "the distance between the user and the microwave oven is smaller than a set distance threshold". Subsequently, whenever the user approaches the microwave oven and the distance falls below the threshold, the trigger condition is met and the usage instruction audio plays automatically.
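The distance-based trigger condition in this example can be sketched directly; the threshold value is an assumed placeholder, as the patent only says "a set distance threshold".

```python
import math
from typing import Tuple

DISTANCE_THRESHOLD = 1.0  # metres; illustrative set distance threshold

def should_autoplay(user_pos: Tuple[float, ...],
                    notepad_pos: Tuple[float, ...],
                    threshold: float = DISTANCE_THRESHOLD) -> bool:
    """Automatic play trigger condition: the user is closer to the
    notepad (fixed on the microwave oven) than the set threshold."""
    return math.dist(user_pos, notepad_pos) < threshold
```

The recipe-video example that follows combines a trigger like this with a date check and a "user entered the kitchen" event.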
For another example, while wearing the augmented reality device and using the See-Through function, a user compiles several daily recipes in video form, one video per day, and associates each with an automatic play trigger condition: the user enters the kitchen and the current date matches the date corresponding to the recipe. Subsequently, when the user wears the augmented reality device with the See-Through function enabled, the July 2 recipe video plays upon entering the kitchen on July 2, and the July 3 recipe video plays upon entering the kitchen on July 3.
On the basis of the above technical solutions, optionally, when the automatic play trigger condition is met, the display state of the notepad is set to the unfolded state, and the audio or video is automatically played in the notepad.
On the basis of the above technical solutions, optionally, the method further comprises: when it is detected that the user gazes at the notepad and the distance between the user and the notepad is greater than a second set distance, moving the notepad to a target area, the target area being a region centered on the user's position with a third set distance as its radius; the third set distance is smaller than the second set distance. The user's gaze at the notepad indicates an intention to browse the notepad content information recorded in it; in that case the notepad is automatically moved from a location far from the user to one nearby so that the user can see the notepad content information clearly.
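A sketch of this gaze-driven move, assuming the notepad is placed on the line of sight at the third set distance (the patent defines only the target region, not the exact landing point, so that placement is an assumption); the distance values are illustrative.

```python
import math
from typing import Tuple

SECOND_SET_DISTANCE = 3.0  # beyond this, a gazed-at notepad is brought closer
THIRD_SET_DISTANCE = 0.8   # radius of the target area around the user

def on_gaze(user_pos: Tuple[float, ...],
            notepad_pos: Tuple[float, ...]) -> Tuple[float, ...]:
    """If the user gazes at a notepad farther away than the second set
    distance, move it into the target area centered on the user."""
    d = math.dist(user_pos, notepad_pos)
    if d <= SECOND_SET_DISTANCE:
        return tuple(notepad_pos)  # already near enough; leave in place
    # Unit vector from user towards the notepad (the line of sight).
    direction = [(n - u) / d for u, n in zip(user_pos, notepad_pos)]
    return tuple(u + THIRD_SET_DISTANCE * c for u, c in zip(user_pos, direction))
```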
On the basis of the above technical solutions, optionally, the method further includes at least one of the following: deleting the notepad in response to a delete instruction to the notepad; modifying the notepad content information in response to a modification instruction to the notepad content information; and adjusting the display effect of the notepad in response to the display effect adjustment instruction for the notepad.
The delete instruction for the notepad can be generated based on the user's operation of the notepad through gestures, voice commands, or a controller. Alternatively, a lifetime may be set for the notepad when it is created, and a delete instruction generated when the end of the notepad's lifetime is reached. The modification instruction for the notepad content information can likewise be generated based on the user's operation through gestures, voice commands, or a controller. The display effect of the notepad refers to its visual presentation in the augmented reality space, such as the notepad's transparency, background color, shape, and page layout. The display effect adjustment instruction for the notepad can be generated based on the user's operation of the notepad through gestures, voice commands, or a controller.
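The lifetime-based deletion path can be sketched as follows; the class shape and field names are hypothetical, chosen only to make the rule concrete.

```python
from datetime import datetime, timedelta

class Notepad:
    """Minimal notepad with an optional lifetime set at creation time."""
    def __init__(self, content, created: datetime, lifetime: timedelta = None):
        self.content = content
        self.expires = created + lifetime if lifetime is not None else None

def should_delete(notepad: Notepad, now: datetime) -> bool:
    """A delete instruction is generated once the lifetime ends;
    notepads without a lifetime are never auto-deleted."""
    return notepad.expires is not None and now >= notepad.expires
```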
It should be noted that, for simplicity of description, the foregoing method embodiments are all described as a series of acts, but it should be understood by those skilled in the art that the present invention is not limited by the order of acts described, as some steps may be performed in other orders or concurrently in accordance with the present invention. Illustratively, in one embodiment, S130 in fig. 4 may be performed first, followed by S120. Further, those skilled in the art will also appreciate that the embodiments described in the specification are all preferred embodiments, and that the acts and modules referred to are not necessarily required for the present invention.
Fig. 5 is a schematic structural diagram of a recording device in an embodiment of the disclosure. The recording device provided by the embodiment of the disclosure can be configured in an augmented reality device. Referring to fig. 5, the recording device specifically includes:
a map construction module 310 for constructing an environment map in an augmented reality space;
the acquisition module 320 is configured to acquire notepad content information and store the notepad content information to a notepad;
and a fixing module 330 for fixing the notepad in the environment map.
Further, the notepad content information includes at least one of: text, pictures, audio, video, 3D objects, or functional components.
Further, the obtaining module 320 is configured to obtain the notepad content information by at least one of the following ways:
acquiring first text information, and taking the first text information as the notepad content information;
acquiring first audio information acquired by an audio acquisition device, and taking the first audio information as the notepad content information;
acquiring second audio information acquired by an audio acquisition device, converting the second audio information into second text information, and taking the second text information as the notepad content information;
acquiring image information acquired by an image acquisition device, and taking the image information as the notepad content information;
acquiring a file downloaded from the Internet, and taking the file as the notepad content information;
acquiring a local file, and taking the local file as notepad content information; the method comprises the steps of,
and acquiring an application program component, and taking the application program component as notepad content information.
Further, the device also comprises a coordinate system construction module, wherein the coordinate system construction module is used for constructing a coordinate system in the augmented reality space and binding the coordinate system with the environment map;
the fixing module 330 is configured to determine a coordinate value of the notepad in the coordinate system, where the coordinate value of the notepad in the coordinate system is a fixed value.
Further, the device also comprises a position adjustment module for adjusting the position of the notepad in the environment map in response to the position adjustment operation of the notepad.
Further, the device also comprises an adsorption module, wherein the adsorption module is used for:
determining key points of objects in the augmented reality space;
and if the distance between the notepad and the key point is smaller than or equal to a first set distance, moving the notepad to the key point.
Further, the environment map is a point cloud map, and the adsorption module is configured to:
determining point cloud points for describing the shape of the object based on the point cloud map;
and taking the point cloud point as a key point of the object.
Further, the display state of the notepad comprises an unfolded state and a hidden state; the notepad is associated with an unfolding trigger condition and a hiding trigger condition; the device also comprises a display state adjustment module for:
setting the display state of the notepad to the unfolded state when the unfolding trigger condition is met;
and setting the display state of the notepad to the hidden state when the hiding trigger condition is met.
Further, the notepad content information includes audio, the audio is associated with an automatic play trigger condition, and the device further includes an automatic play module for: automatically playing the audio when the automatic play trigger condition is met;
and/or,
the notepad content information includes a video, the video is associated with an automatic play trigger condition, and the device further includes an automatic play module for:
automatically playing the video when the automatic play trigger condition is met.
Further, the position adjustment module is further configured to:
when it is detected that the user gazes at the notepad and the distance between the user and the notepad is greater than a second set distance, moving the notepad to a target area, the target area being a region centered on the user's position with a third set distance as its radius; the third set distance is smaller than the second set distance.
Further, the apparatus also includes an editing module for performing at least one of:
deleting the notepad in response to a deletion instruction of the notepad;
Modifying the notepad content information in response to a modification instruction to the notepad content information in the notepad; the method comprises the steps of,
and responding to a display effect adjustment instruction of the notepad, and adjusting the display effect of the notepad.
The recording device provided by the embodiment of the disclosure can execute the steps executed by the augmented reality device in the recording method provided by the embodiments of the disclosure, and has the corresponding beneficial effects, which are not repeated here.
Fig. 6 is a schematic structural diagram of an electronic device in an embodiment of the disclosure. Referring now in particular to fig. 6, a schematic diagram of an electronic device 1000 suitable for use in implementing embodiments of the present disclosure is shown. The electronic device 1000 in the embodiments of the present disclosure may include, but is not limited to, mobile terminals such as mobile phones, notebook computers, digital broadcast receivers, PDAs (personal digital assistants), PADs (tablet computers), PMPs (portable multimedia players), in-vehicle terminals (e.g., in-vehicle navigation terminals), wearable electronic devices, and the like, and fixed terminals such as digital TVs, desktop computers, smart home devices, and the like. The electronic device shown in fig. 6 is merely an example and should not be construed to limit the functionality and scope of use of the disclosed embodiments.
As shown in fig. 6, the electronic device 1000 may include a processing means (e.g., a central processing unit, a graphics processor, etc.) 1001 that may perform various suitable actions and processes according to a program stored in a Read Only Memory (ROM) 1002 or a program loaded from a storage means 1008 into a Random Access Memory (RAM) 1003, so as to implement a recording method of an embodiment as described in the present disclosure. In the RAM 1003, various programs and data necessary for the operation of the electronic device 1000 are also stored. The processing device 1001, the ROM 1002, and the RAM 1003 are connected to each other by a bus 1004. An input/output (I/O) interface 1005 is also connected to bus 1004.
In general, the following devices may be connected to the I/O interface 1005: input devices 1006 including, for example, a touch screen, touchpad, keyboard, mouse, camera, microphone, accelerometer, gyroscope, and the like; an output device 1007 including, for example, a Liquid Crystal Display (LCD), speaker, vibrator, etc.; storage 1008 including, for example, magnetic tape, hard disk, etc.; and communication means 1009. The communication means 1009 may allow the electronic device 1000 to communicate wirelessly or by wire with other devices to exchange information. While fig. 6 shows an electronic device 1000 having various means, it is to be understood that not all of the illustrated means are required to be implemented or provided. More or fewer devices may be implemented or provided instead.
In particular, according to embodiments of the present disclosure, the processes described above with reference to the flowcharts may be implemented as computer software programs. For example, embodiments of the present disclosure include a computer program product comprising a computer program embodied on a non-transitory computer-readable medium, the computer program comprising program code for performing the method shown in the flowcharts, thereby implementing the recording method as described above. In such an embodiment, the computer program may be downloaded and installed from a network via the communication device 1009, or installed from the storage device 1008, or installed from the ROM 1002. The above-described functions defined in the method of the embodiment of the present disclosure are performed when the computer program is executed by the processing device 1001.
It should be noted that the computer readable medium described in the present disclosure may be a computer readable signal medium or a computer readable storage medium, or any combination of the two. The computer readable storage medium can be, for example, but not limited to, an electronic, magnetic, optical, electromagnetic, infrared, or semiconductor system, apparatus, or device, or a combination of any of the foregoing. More specific examples of the computer-readable storage medium may include, but are not limited to: an electrical connection having one or more wires, a portable computer diskette, a hard disk, a Random Access Memory (RAM), a read-only memory (ROM), an erasable programmable read-only memory (EPROM or flash memory), an optical fiber, a portable compact disc read-only memory (CD-ROM), an optical storage device, a magnetic storage device, or any suitable combination of the foregoing. In the context of this disclosure, a computer-readable storage medium may be any tangible medium that can contain, or store a program for use by or in connection with an instruction execution system, apparatus, or device. In the present disclosure, however, the computer-readable signal medium may include a data signal propagated in baseband or as part of a carrier wave, with the computer-readable program code embodied therein. Such a propagated signal may take many forms, including, but not limited to, electro-magnetic, optical, or any suitable combination of the foregoing. A computer readable signal medium may also be any computer readable medium that is not a computer readable storage medium and that can communicate, propagate, or transport a program for use by or in connection with an instruction execution system, apparatus, or device.
Program code embodied on a computer readable medium may be transmitted using any appropriate medium, including but not limited to: electrical wires, fiber optic cables, RF (radio frequency), and the like, or any suitable combination of the foregoing.
In some implementations, the clients and servers may communicate using any currently known or future developed network protocol, such as HTTP (HyperText Transfer Protocol), and may be interconnected by digital data communication (e.g., a communication network) in any form or medium. Examples of communication networks include a local area network ("LAN"), a wide area network ("WAN"), an internetwork (e.g., the Internet), and peer-to-peer networks (e.g., ad hoc peer-to-peer networks), as well as any currently known or future developed networks.
The computer readable medium may be contained in the electronic device; or may exist alone without being incorporated into the electronic device.
The computer readable medium carries one or more programs which, when executed by the electronic device, cause the electronic device to:
constructing an environment map in the augmented reality space;
constructing a coordinate system in the augmented reality space, and binding the coordinate system with the environment map;
acquiring notepad content information and storing the notepad content information to a notepad;
and fixing the notepad in the environment map.
Alternatively, the electronic device may perform other steps described in the above embodiments when the above one or more programs are executed by the electronic device.
Computer program code for carrying out operations of the present disclosure may be written in one or more programming languages, including, but not limited to, an object oriented programming language such as Java, Smalltalk, or C++, and conventional procedural programming languages, such as the "C" programming language or similar programming languages. The program code may execute entirely on the user's computer, partly on the user's computer, as a stand-alone software package, partly on the user's computer and partly on a remote computer, or entirely on the remote computer or server. In the case of a remote computer, the remote computer may be connected to the user's computer through any kind of network, including a Local Area Network (LAN) or a Wide Area Network (WAN), or may be connected to an external computer (for example, through the Internet using an Internet service provider).
The flowcharts and block diagrams in the figures illustrate the architecture, functionality, and operation of possible implementations of systems, methods and computer program products according to various embodiments of the present disclosure. In this regard, each block in the flowchart or block diagrams may represent a module, segment, or portion of code, which comprises one or more executable instructions for implementing the specified logical function(s). It should also be noted that, in some alternative implementations, the functions noted in the block may occur out of the order noted in the figures. For example, two blocks shown in succession may, in fact, be executed substantially concurrently, or the blocks may sometimes be executed in the reverse order, depending upon the functionality involved. It will also be noted that each block of the block diagrams and/or flowchart illustration, and combinations of blocks in the block diagrams and/or flowchart illustration, can be implemented by special purpose hardware-based systems which perform the specified functions or acts, or combinations of special purpose hardware and computer instructions.
The units involved in the embodiments of the present disclosure may be implemented by means of software, or may be implemented by means of hardware. Wherein the names of the units do not constitute a limitation of the units themselves in some cases.
The functions described above herein may be performed, at least in part, by one or more hardware logic components. For example, without limitation, exemplary types of hardware logic components that may be used include: a Field Programmable Gate Array (FPGA), an Application Specific Integrated Circuit (ASIC), an Application Specific Standard Product (ASSP), a system on a chip (SOC), a Complex Programmable Logic Device (CPLD), and the like.
In the context of this disclosure, a machine-readable medium may be a tangible medium that can contain, or store a program for use by or in connection with an instruction execution system, apparatus, or device. The machine-readable medium may be a machine-readable signal medium or a machine-readable storage medium. The machine-readable medium may include, but is not limited to, an electronic, magnetic, optical, electromagnetic, infrared, or semiconductor system, apparatus, or device, or any suitable combination of the foregoing. More specific examples of a machine-readable storage medium would include an electrical connection based on one or more wires, a portable computer diskette, a hard disk, a Random Access Memory (RAM), a read-only memory (ROM), an erasable programmable read-only memory (EPROM or flash memory), an optical fiber, a portable compact disc read-only memory (CD-ROM), an optical storage device, a magnetic storage device, or any suitable combination of the foregoing.
According to one or more embodiments of the present disclosure, the present disclosure provides an electronic device comprising:
one or more processors;
a memory for storing one or more programs;
the one or more programs, when executed by the one or more processors, cause the one or more processors to implement any of the recording methods provided by the present disclosure.
According to one or more embodiments of the present disclosure, there is provided a computer-readable storage medium having stored thereon a computer program which, when executed by a processor, implements a recording method as provided in any embodiment of the present disclosure.
The disclosed embodiments also provide a computer program product comprising a computer program or instructions which, when executed by a processor, implements the recording method as described above.
It should be noted that in this document, relational terms such as "first" and "second" and the like are used solely to distinguish one entity or action from another entity or action without necessarily requiring or implying any actual such relationship or order between such entities or actions. Moreover, the terms "comprises," "comprising," or any other variation thereof, are intended to cover a non-exclusive inclusion, such that a process, method, article, or apparatus that comprises a list of elements does not include only those elements but may include other elements not expressly listed or inherent to such process, method, article, or apparatus. Without further limitation, an element defined by the phrase "comprising a" does not exclude the presence of other like elements in a process, method, article, or apparatus that comprises the element.
The foregoing is merely a specific embodiment of the disclosure to enable one skilled in the art to understand or practice the disclosure. Various modifications to these embodiments will be readily apparent to those skilled in the art, and the generic principles defined herein may be applied to other embodiments without departing from the spirit or scope of the disclosure. Thus, the present disclosure is not intended to be limited to the embodiments shown and described herein but is to be accorded the widest scope consistent with the principles and novel features disclosed herein.
Claims (15)
1. A recording method, comprising:
constructing an environment map in the augmented reality space;
acquiring notepad content information and storing the notepad content information to a notepad;
and fixing the notepad in the environment map.
2. The method according to claim 1, wherein
the notepad content information includes at least one of: text, pictures, audio, video, 3D objects, or functional components.
3. The method of claim 1, wherein the acquiring notepad content information comprises at least one of:
acquiring first text information, and taking the first text information as the notepad content information;
acquiring first audio information acquired by an audio acquisition device, and taking the first audio information as the notepad content information;
acquiring second audio information acquired by an audio acquisition device, converting the second audio information into second text information, and taking the second text information as the notepad content information;
acquiring image information acquired by an image acquisition device, and taking the image information as the notepad content information;
acquiring a file downloaded from the Internet, and taking the file as the notepad content information;
acquiring a local file, and taking the local file as notepad content information; the method comprises the steps of,
and acquiring an application program component, and taking the application program component as notepad content information.
4. The method according to claim 1, wherein the method further comprises:
constructing a coordinate system in the augmented reality space, and binding the coordinate system with the environment map;
the fixing the notepad in the environment map comprises:
and determining the coordinate value of the notepad in the coordinate system, wherein the coordinate value of the notepad in the coordinate system is a fixed value.
5. The method as recited in claim 1, further comprising: and adjusting the position of the notepad in the environment map in response to the position adjustment operation of the notepad.
6. The method as recited in claim 1, further comprising:
determining key points of objects in the augmented reality space;
and if the distance between the notepad and the key point is smaller than or equal to a first set distance, moving the notepad to the key point.
7. The method of claim 6, wherein the environment map is a point cloud map, and wherein the determining keypoints of the object in the augmented reality space comprises:
determining point cloud points for describing the shape of the object based on the point cloud map;
and taking the point cloud point as a key point of the object.
8. The method of claim 1, wherein the display state of the notepad comprises an unfolded state and a hidden state; the notepad is associated with an unfolding trigger condition and a hiding trigger condition; the method further comprises:
when the unfolding trigger condition is met, setting the display state of the notepad to the unfolded state;
and when the hiding trigger condition is met, setting the display state of the notepad to the hidden state.
9. A method according to claim 2 or 3, wherein the notepad content information comprises audio associated with an automatic play trigger condition, the method further comprising:
When the automatic playing triggering condition is met, automatically playing the audio;
and/or the number of the groups of groups,
the notepad content information includes a video, the video being associated with an automatic play trigger condition, the method further comprising:
and when the automatic playing triggering condition is met, automatically playing the video.
10. The method as recited in claim 1, further comprising:
when it is detected that the user gazes at the notepad and the distance between the user and the notepad is greater than a second set distance, moving the notepad to a target area, the target area being a region centered on the user's position with a third set distance as its radius; the third set distance is smaller than the second set distance.
11. The method of claim 1, further comprising at least one of:
deleting the notepad in response to a deletion instruction of the notepad;
modifying the notepad content information in response to a modification instruction to the notepad content information in the notepad; the method comprises the steps of,
and responding to a display effect adjustment instruction of the notepad, and adjusting the display effect of the notepad.
12. A recording apparatus, comprising:
the map construction module is used for constructing an environment map in the augmented reality space;
the acquisition module is used for acquiring the notepad content information and storing the notepad content information to a notepad;
and the fixing module is used for fixing the notepad in the environment map.
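The apparatus of claim 12 maps naturally onto three single-responsibility components. The sketch below is purely illustrative: class names, method names, and data shapes are assumptions, and a real map construction module would run SLAM over sensor data rather than return a dict.

```python
class MapConstructionModule:
    def build(self):
        # Stand-in for environment mapping: the "map" is just a dict
        # of named anchor points in the augmented reality space.
        return {"anchors": {}}

class AcquisitionModule:
    def acquire(self, content):
        # Store the acquired content information into a notepad record.
        return {"content": content}

class FixingModule:
    def fix(self, env_map, notepad, anchor_id, position):
        # Fix the notepad at a position in the environment map so it
        # stays anchored to the physical scene.
        env_map["anchors"][anchor_id] = {"notepad": notepad, "pos": position}

env_map = MapConstructionModule().build()
pad = AcquisitionModule().acquire("meeting at 3pm")
FixingModule().fix(env_map, pad, "desk", (1.0, 0.0, 0.75))
```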
13. An electronic device, the electronic device comprising:
one or more processors;
a storage means for storing one or more programs;
the one or more programs, when executed by the one or more processors, cause the one or more processors to implement the method of any of claims 1-11.
14. A computer readable storage medium, on which a computer program is stored, characterized in that the program, when executed by a processor, implements the method according to any one of claims 1-11.
15. A computer program product comprising a computer program or instructions which, when executed by a processor, implements the method of any of claims 1-11.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202211194299.XA CN117834843A (en) | 2022-09-28 | 2022-09-28 | Recording method, recording device, electronic apparatus, recording medium, and computer program product |
Publications (1)
Publication Number | Publication Date |
---|---|
CN117834843A true CN117834843A (en) | 2024-04-05 |
Family
ID=90521522
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN202211194299.XA Pending CN117834843A (en) | 2022-09-28 | 2022-09-28 | Recording method, recording device, electronic apparatus, recording medium, and computer program product |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN117834843A (en) |
- 2022-09-28: CN patent application CN202211194299.XA filed (status: Pending)
Similar Documents
Publication | Publication Date | Title |
---|---|---|
US11348316B2 (en) | Location-based virtual element modality in three-dimensional content | |
US10553031B2 (en) | Digital project file presentation | |
US10055888B2 (en) | Producing and consuming metadata within multi-dimensional data | |
CN107850779B (en) | Virtual position anchor | |
KR102115634B1 (en) | Mixed reality experience sharing | |
CN112074797A (en) | System and method for anchoring virtual objects to physical locations | |
CN114503059A (en) | Automated eye-worn device sharing system | |
KR102499354B1 (en) | Electronic apparatus for providing second content associated with first content displayed through display according to motion of external object, and operating method thereof | |
US20230081605A1 (en) | Digital assistant for moving and copying graphical elements | |
US11354867B2 (en) | Environment application model | |
US11733783B2 (en) | Method and device for presenting a synthesized reality user interface | |
KR20220024827A (en) | Position synchronization of virtual and physical cameras | |
US20240187464A1 (en) | Synchronization in a Multiuser Experience | |
CN112912822A (en) | System for controlling audio-enabled connected devices in mixed reality environments | |
CN117834843A (en) | Recording method, recording device, electronic apparatus, recording medium, and computer program product | |
US20240248678A1 (en) | Digital assistant placement in extended reality | |
US11308716B1 (en) | Tailoring a computer-generated reality experience based on a recognized object | |
US11361473B1 (en) | Including a physical object based on context | |
US11989404B1 (en) | Time-based visualization of content anchored in time | |
US20240169603A1 (en) | Wearable device for displaying visual object for controlling virtual object and method thereof | |
CN117788758A (en) | Color extraction method, apparatus, electronic device, storage medium and computer program product | |
US20220366656A1 (en) | Method and Device for Generating a Map from a Photo Set | |
CN117075770A (en) | Interaction control method and device based on augmented reality, electronic equipment and storage medium | |
CN117999532A (en) | Parallel renderer for electronic device |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||