WO2021129514A1 - Augmented reality processing method and apparatus, system, storage medium and electronic device - Google Patents

Augmented reality processing method and apparatus, system, storage medium and electronic device

Info

Publication number
WO2021129514A1
Authority
WO
WIPO (PCT)
Prior art keywords
virtual object
terminal
current frame
information
frame image
Prior art date
Application number
PCT/CN2020/137279
Other languages
English (en)
French (fr)
Inventor
曾凡涛
Original Assignee
Oppo广东移动通信有限公司
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Oppo广东移动通信有限公司 filed Critical Oppo广东移动通信有限公司
Priority to EP20906655.4A priority Critical patent/EP4080464A4/en
Publication of WO2021129514A1 publication Critical patent/WO2021129514A1/zh
Priority to US17/847,273 priority patent/US20220319136A1/en

Classifications

    • G PHYSICS
      • G06 COMPUTING; CALCULATING OR COUNTING
        • G06F ELECTRIC DIGITAL DATA PROCESSING
          • G06F 3/00 Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
            • G06F 3/01 Input arrangements or combined input and output arrangements for interaction between user and computer
              • G06F 3/011 Arrangements for interaction with the human body, e.g. for user immersion in virtual reality
              • G06F 3/03 Arrangements for converting the position or the displacement of a member into a coded form
                • G06F 3/0304 Detection arrangements using opto-electronic means
              • G06F 3/048 Interaction techniques based on graphical user interfaces [GUI]
                • G06F 3/0484 Interaction techniques for the control of specific functions or operations, e.g. selecting or manipulating an object, an image or a displayed text element, setting a parameter value or selecting a range
                  • G06F 3/04845 Interaction techniques for image manipulation, e.g. dragging, rotation, expansion or change of colour
        • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
          • G06T 19/00 Manipulating 3D models or images for computer graphics
            • G06T 19/006 Mixed reality
            • G06T 19/20 Editing of 3D images, e.g. changing shapes or colours, aligning objects or positioning parts
          • G06T 7/00 Image analysis
            • G06T 7/30 Determination of transform parameters for the alignment of images, i.e. image registration
              • G06T 7/33 Image registration using feature-based methods
                • G06T 7/337 Image registration using feature-based methods involving reference images or patches
            • G06T 7/50 Depth or shape recovery
            • G06T 7/70 Determining position or orientation of objects or cameras
              • G06T 7/73 Determining position or orientation using feature-based methods
                • G06T 7/74 Determining position or orientation using feature-based methods involving reference images or patches
          • G06T 2200/00 Indexing scheme for image data processing or generation, in general
            • G06T 2200/04 Involving 3D image data
          • G06T 2207/00 Indexing scheme for image analysis or image enhancement
            • G06T 2207/10 Image acquisition modality
              • G06T 2207/10016 Video; image sequence
            • G06T 2207/20 Special algorithmic details
              • G06T 2207/20112 Image segmentation details
              • G06T 2207/20164 Salient point detection; corner detection
            • G06T 2207/30 Subject of image; context of image processing
              • G06T 2207/30244 Camera pose
          • G06T 2219/00 Indexing scheme for manipulating 3D models or images for computer graphics
            • G06T 2219/024 Multi-user, collaborative environment
            • G06T 2219/20 Indexing scheme for editing of 3D models
              • G06T 2219/2016 Rotation, translation, scaling
        • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
          • G06V 20/00 Scenes; scene-specific elements
            • G06V 20/20 Scene-specific elements in augmented reality scenes
    • H ELECTRICITY
      • H04 ELECTRIC COMMUNICATION TECHNIQUE
        • H04L TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
          • H04L 67/00 Network arrangements or protocols for supporting network services or applications
            • H04L 67/01 Protocols
              • H04L 67/10 Protocols in which an application is distributed across nodes in the network

Definitions

  • The present disclosure relates to the field of augmented reality technology and, in particular, to an augmented reality processing method, an augmented reality processing device, an augmented reality processing system, a computer-readable storage medium, and an electronic device.
  • Augmented reality (AR) is a technology that integrates the virtual world with the real world, and it has been widely applied in education, gaming, medical care, the Internet of Things, intelligent manufacturing, and other fields.
  • virtual object information can be shared among multiple terminals.
  • In related solutions, the user of the terminal must enter the room ID number that identifies the scene before the virtual object information can be obtained. This adds to the user's operations, and when there are multiple AR scenes (that is, multiple room ID numbers), it also increases the user's memory burden and is insufficiently intelligent.
  • According to an aspect of the present disclosure, there is provided an augmented reality processing method applied to a first terminal, including: acquiring a current frame image collected by a camera module of the first terminal, extracting image parameters of the current frame image, and sending the image parameters to the cloud, so that the cloud can use pre-stored mapping results to determine the information of the virtual object corresponding to the image parameters; receiving the virtual object information sent by the cloud and displaying the virtual object; and editing the virtual object in response to an editing operation on the virtual object.
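The terminal-side steps above can be sketched in code. This is a minimal illustration under stated assumptions, not the patented implementation: `ARClient`, the `fingerprint` image parameter, and the cloud object's `query`/`store_edit` interface are all hypothetical stand-ins for the real feature extraction and network protocol.

```python
class ARClient:
    """Hypothetical sketch of the first-terminal flow: extract image
    parameters from the current frame, query the cloud, display the
    returned virtual objects, and report local edits back."""

    def __init__(self, cloud):
        self.cloud = cloud          # any object exposing query() / store_edit()
        self.displayed = {}         # object id -> virtual object info

    def extract_image_parameters(self, frame_bytes):
        # Stand-in for real feature extraction (keypoints, descriptors,
        # optionally depth); here we just fingerprint the raw bytes.
        return {"fingerprint": hash(frame_bytes)}

    def process_frame(self, frame_bytes):
        params = self.extract_image_parameters(frame_bytes)
        for obj in self.cloud.query(params):        # cloud relocalizes and
            self.displayed[obj["id"]] = dict(obj)   # returns matching objects
        return sorted(self.displayed)

    def edit(self, object_id, **changes):
        # Apply the edit locally, then feed the result back to the cloud.
        self.displayed[object_id].update(changes)
        self.cloud.store_edit(object_id, changes)
```

Note that no room ID appears anywhere in this flow: the image parameters themselves identify the scene, which is the point of the method.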
  • According to an aspect of the present disclosure, there is provided an augmented reality processing method applied to the cloud, including: acquiring image parameters of a current frame image sent by a first terminal; using pre-stored mapping results to determine the virtual object information corresponding to the image parameters of the current frame image; sending the virtual object information to the first terminal so that the first terminal displays the virtual object; and obtaining and storing the result of the first terminal's editing of the virtual object.
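The cloud side can be sketched the same way. Again a hedged illustration: `ARCloud`, the `fingerprint`-keyed scene index, and the dictionary storage are invented simplifications of "pre-stored mapping results" for the sake of a runnable example.

```python
class ARCloud:
    """Hypothetical sketch of the cloud side: match incoming image
    parameters against pre-stored mapping results, return the virtual
    objects anchored in the recognized scene, and store edits."""

    def __init__(self):
        self.scene_index = {}   # image fingerprint -> scene id (mapping results)
        self.objects = {}       # scene id -> {object id -> object info}

    def register_map(self, fingerprint, scene_id, objects):
        # Called during the mapping process, e.g. by the second terminal.
        self.scene_index[fingerprint] = scene_id
        self.objects[scene_id] = {o["id"]: dict(o) for o in objects}

    def query(self, params):
        # Relocalize: recognize the scene from the image parameters, so the
        # user never has to type a room ID.
        scene_id = self.scene_index.get(params["fingerprint"])
        if scene_id is None:
            return []           # relocalization failed; no objects to show
        return list(self.objects[scene_id].values())

    def store_edit(self, object_id, changes):
        # Persist the first terminal's edit so other terminals see it.
        for scene in self.objects.values():
            if object_id in scene:
                scene[object_id].update(changes)
                return True
        return False
```

In practice the lookup would be a visual relocalization against stored maps rather than an exact fingerprint match, but the storage/retrieval shape is the same.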
  • According to an aspect of the present disclosure, there is provided an augmented reality processing device applied to a first terminal, including: a parameter upload module, configured to acquire the current frame image collected by the camera module of the first terminal, extract the image parameters of the current frame image, and send the image parameters to the cloud, so that the cloud can use pre-stored mapping results to determine the information of the virtual object corresponding to the image parameters; a virtual object acquisition module, configured to receive the virtual object information sent by the cloud and display the virtual object; and a virtual object editing module, configured to edit the virtual object in response to an editing operation on the virtual object.
  • According to an aspect of the present disclosure, there is provided an augmented reality processing device applied to the cloud, including: a parameter acquisition module, configured to acquire the image parameters of the current frame image sent by a first terminal; a virtual object determination module, configured to use pre-stored mapping results to determine the virtual object information corresponding to the image parameters of the current frame image; a virtual object sending module, configured to send the virtual object information to the first terminal so as to display the virtual object on the first terminal; and an editing result obtaining module, configured to obtain and store the result of the first terminal's editing of the virtual object.
  • According to an aspect of the present disclosure, there is provided an augmented reality processing system, including: a first terminal, configured to obtain the current frame image collected by its camera module, extract the image parameters of the current frame image, and send the image parameters to the cloud; obtain the virtual object information sent by the cloud and display the virtual object; and edit the virtual object in response to an editing operation on it, feeding the editing result back to the cloud; and the cloud, configured to obtain the image parameters; use pre-stored mapping results to determine the virtual object information corresponding to the image parameters of the current frame image, and send the virtual object information to the first terminal; and obtain and store the result of the first terminal's editing of the virtual object.
  • According to an aspect of the present disclosure, there is provided a computer-readable storage medium having a computer program stored thereon which, when executed by a processor, implements any one of the augmented reality processing methods described above.
  • According to an aspect of the present disclosure, there is provided an electronic device, including: a processor; and a memory configured to store one or more programs which, when executed by the processor, cause the processor to implement any one of the augmented reality processing methods described above.
  • FIG. 1 shows a schematic diagram of a system architecture for implementing multi-person AR according to an embodiment of the present disclosure
  • Figure 2 shows a schematic structural diagram of an electronic device suitable for implementing embodiments of the present disclosure
  • FIG. 3 schematically shows a flowchart of an augmented reality processing method according to an exemplary embodiment of the present disclosure
  • FIG. 4 shows a schematic diagram of an interface of the first terminal performing an augmented reality process in response to a user operation
  • FIG. 5 shows a schematic diagram of a first terminal displaying an editing sub-interface of a virtual object on the interface in response to a user's selection operation according to an embodiment of the present disclosure
  • FIG. 6 shows a schematic diagram of a first terminal displaying an editing sub-interface of a virtual object on the interface in response to a user's selection operation according to another embodiment of the present disclosure
  • FIG. 7 shows a schematic diagram of a first terminal moving a virtual object in response to a user's operation according to an embodiment of the present disclosure
  • FIG. 8 shows a schematic diagram of a first terminal adjusting the size of a virtual object in response to a user's operation according to an embodiment of the present disclosure
  • FIG. 9 shows a schematic diagram of a first terminal deleting a virtual object in response to a user's operation according to an embodiment of the present disclosure
  • FIG. 10 shows a schematic diagram of a first terminal adding a new virtual object in a scene in response to a user's virtual object adding operation according to an embodiment of the present disclosure
  • FIG. 11 shows a schematic diagram showing a selection sub-interface of virtual objects before and after editing on a third terminal according to an embodiment of the present disclosure
  • FIG. 12 schematically shows a flowchart of an augmented reality processing method according to another exemplary embodiment of the present disclosure
  • FIG. 13 schematically shows an interaction diagram of an augmented reality processing solution according to an exemplary embodiment of the present disclosure
  • FIG. 14 shows a schematic diagram of an effect of an augmented reality processing solution applying an exemplary embodiment of the present disclosure
  • FIG. 15 schematically shows a block diagram of an augmented reality processing device according to the first exemplary embodiment of the present disclosure
  • FIG. 16 schematically shows a block diagram of an augmented reality processing device according to a second exemplary embodiment of the present disclosure
  • FIG. 17 schematically shows a block diagram of an augmented reality processing apparatus according to a third exemplary embodiment of the present disclosure
  • FIG. 18 schematically shows a block diagram of an augmented reality processing apparatus according to a fourth exemplary embodiment of the present disclosure
  • FIG. 19 schematically shows a block diagram of an augmented reality processing apparatus according to a fifth exemplary embodiment of the present disclosure.
  • FIG. 20 schematically shows a block diagram of an augmented reality processing apparatus according to a sixth exemplary embodiment of the present disclosure
  • FIG. 21 schematically shows a block diagram of an augmented reality processing apparatus according to a seventh exemplary embodiment of the present disclosure
  • FIG. 22 schematically shows a block diagram of an augmented reality processing apparatus according to an eighth exemplary embodiment of the present disclosure.
  • FIG. 1 shows a schematic diagram of a system architecture of a multi-person AR that implements an embodiment of the present disclosure.
  • the multi-person AR system implementing the embodiments of the present disclosure may include a cloud 1000, a first terminal 1100, and a second terminal 1200.
  • the second terminal 1200 can generally be used as a terminal for mapping a scene.
  • The second terminal 1200 can configure virtual objects in the scene, and can send the constructed map information and the virtual object information to the cloud 1000 for maintenance.
  • the first terminal 1100 may be a terminal that performs relocation and obtains virtual object information from the cloud 1000.
  • the first terminal 1100 and the second terminal 1200 may be terminals capable of performing AR related processing, including but not limited to mobile phones, tablets, smart wearable devices, and the like.
  • the cloud 1000 may also be called a cloud server, which may be a single server or a server cluster composed of multiple servers.
  • the first terminal 1100 or the second terminal 1200 may be connected to the cloud 1000 through the medium of a communication link, and the medium of the communication link may include, for example, a wire, a wireless communication link, or an optical fiber cable.
  • the system may also include a third terminal, a fourth terminal, and other mobile terminals that communicate with the cloud 1000, and the present disclosure does not limit the number of terminals included in the system.
  • the first terminal 1100 can obtain the current frame image, extract the image parameters of the current frame image, and send the image parameters to the cloud 1000.
  • The cloud 1000 uses the pre-stored mapping results to determine the virtual object information corresponding to the image parameters of the current frame image, and sends the virtual object information to the first terminal 1100.
  • The first terminal 1100 may display the virtual object, edit the virtual object in response to an editing operation on it, and feed the editing result back to the cloud 1000, where it is stored and maintained.
  • the mapping result and virtual object information pre-stored in the cloud 1000 may be determined by the second terminal 1200 through a mapping process, for example.
  • the first terminal 1100 may also be a device for mapping
  • the second terminal 1200 may also be a device for relocating and acquiring virtual objects.
  • The first terminal 1100 may also be the second terminal 1200. That is to say, after the first terminal 1100 creates a map and configures virtual objects, it can obtain its pre-configured virtual objects when it is in the mapped scene again.
  • the first terminal 1100 may include a camera module 1110, an inertial measurement unit 1120, a Simultaneous Localization And Mapping (SLAM) unit 1130, a multi-person AR unit 1140, and an application program 1150.
  • the camera module 1110 can be used to collect video frame images, which are usually RGB images. During the execution of the following augmented reality processing, the camera module 1110 may be used to obtain the current frame image.
  • the inertial measurement unit 1120 may include a gyroscope and an accelerometer, which respectively measure the angular velocity and acceleration of the first terminal 1100, and thereby determine the inertial information of the first terminal 1100.
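As a toy illustration of how inertial information is derived from such measurements, the rotation about one axis can be dead-reckoned by integrating the gyroscope's angular-velocity samples over time. The function name, the single-axis simplification, and the sample values are assumptions for illustration; a real SLAM pipeline fuses this with accelerometer and visual data precisely because pure integration drifts.

```python
def integrate_yaw(gyro_z_samples, dt):
    """Dead-reckon a yaw angle (radians) by Euler integration of
    gyroscope z-axis angular-velocity samples (rad/s) spaced dt seconds
    apart. Sensor bias and noise accumulate as drift, which is why the
    SLAM unit also consumes camera images to correct the estimate."""
    yaw = 0.0
    for omega in gyro_z_samples:
        yaw += omega * dt
    return yaw
```

For example, one second of samples at a constant 0.5 rad/s (100 samples, 10 ms apart) integrates to a yaw of 0.5 radians.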
  • The SLAM unit 1130 can be used to obtain the inertial information sent by the inertial measurement unit 1120 and the images sent by the camera module 1110, and to perform the mapping or relocation process.
  • The multi-person AR unit 1140 can obtain the current frame image sent by the SLAM unit 1130, and determine the image parameters of the current frame image.
  • the application program 1150 may send the determined image parameters to the cloud 1000.
  • the user can also use the application 1150 to configure the virtual object, and upload the configured virtual object information to the cloud 1000.
  • the first terminal 1100 may also include, for example, a depth sensing module (not shown) for collecting depth information of the scene, so as to further use the depth information to construct image parameters.
  • The depth sensing module may be a dual-camera module, a structured light module, or a TOF (Time-of-Flight) module. The present disclosure places no particular restriction on this.
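For the TOF case, the depth of a surface point follows directly from the round-trip time of the emitted light. The sketch below is a simplification that ignores phase-based measurement, calibration, and noise; `tof_depth` is a hypothetical helper, not part of any module named in this disclosure.

```python
SPEED_OF_LIGHT = 299_792_458.0  # metres per second

def tof_depth(round_trip_seconds):
    """Depth as measured by a TOF module: the light pulse travels to the
    surface and back, so the one-way distance is c * t / 2."""
    return SPEED_OF_LIGHT * round_trip_seconds / 2.0
```

A round trip of 20 nanoseconds thus corresponds to a surface roughly 3 metres away, which is why TOF timing must resolve picosecond-scale differences for centimetre-scale depth maps.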
  • The second terminal 1200 may at least include a camera module 1210, an inertial measurement unit 1220, a SLAM unit 1230, a multi-person AR unit 1240, and an application program 1250.
  • Fig. 2 shows a schematic diagram of an electronic device suitable for implementing exemplary embodiments of the present disclosure.
  • the electronic device may refer to the first terminal, the second terminal, the third terminal, etc. described in the present disclosure. It should be noted that the electronic device shown in FIG. 2 is only an example, and should not bring any limitation to the function and scope of use of the embodiments of the present disclosure.
  • the electronic device of the present disclosure includes at least a processor and a memory.
  • the memory is used to store one or more programs.
  • The processor can implement at least the augmented reality processing method applied to the first terminal according to the exemplary embodiments of the present disclosure.
  • The electronic device 200 may include: a processor 210, an internal memory 221, an external memory interface 222, a Universal Serial Bus (USB) interface 230, a charging management module 240, a power management module 241, a battery 242, an antenna 1, an antenna 2, a mobile communication module 250, a wireless communication module 260, an audio module 270, a speaker 271, a receiver 272, a microphone 273, an earphone interface 274, a sensor module 280, a display screen 290, a camera module 291, an indicator 292, a motor 293, a button 294, a Subscriber Identification Module (SIM) card interface 295, and so on.
  • The sensor module 280 may include a depth sensor 2801, a pressure sensor 2802, a gyroscope sensor 2803, an air pressure sensor 2804, a magnetic sensor 2805, an acceleration sensor 2806, a distance sensor 2807, a proximity light sensor 2808, a fingerprint sensor 2809, a temperature sensor 2810, a touch sensor 2811, an ambient light sensor 2812, a bone conduction sensor 2813, and so on.
  • the structure illustrated in the embodiment of the present application does not constitute a specific limitation on the electronic device 200.
  • the electronic device 200 may include more or fewer components than shown, or combine certain components, or split certain components, or arrange different components.
  • the illustrated components can be implemented in hardware, software, or a combination of software and hardware.
  • the processor 210 may include one or more processing units.
  • The processor 210 may include an application processor (AP), a modem processor, a graphics processing unit (GPU), an image signal processor (ISP), a controller, a video codec, a digital signal processor (DSP), a baseband processor, and/or a neural-network processing unit (NPU), among others.
  • the different processing units may be independent devices or integrated in one or more processors.
  • a memory may be provided in the processor 210 to store instructions and data.
  • The USB interface 230 is an interface that complies with the USB standard specification, and specifically may be a Mini USB interface, a Micro USB interface, a USB Type-C interface, and so on.
  • The USB interface 230 can be used to connect a charger to charge the electronic device 200, and to transfer data between the electronic device 200 and peripheral devices. It can also be used to connect earphones and play audio through them, or to connect other electronic devices, such as AR devices.
  • the charging management module 240 is used to receive charging input from the charger.
  • the charger can be a wireless charger or a wired charger.
  • the power management module 241 is used to connect the battery 242, the charging management module 240, and the processor 210.
  • the power management module 241 receives input from the battery 242 and/or the charging management module 240, and supplies power to the processor 210, the internal memory 221, the display screen 290, the camera module 291, and the wireless communication module 260.
  • the wireless communication function of the electronic device 200 can be implemented by the antenna 1, the antenna 2, the mobile communication module 250, the wireless communication module 260, the modem processor, and the baseband processor.
  • the mobile communication module 250 can provide a wireless communication solution including 2G/3G/4G/5G and the like applied to the electronic device 200.
  • The wireless communication module 260 can provide wireless communication solutions applied to the electronic device 200, including wireless local area network (WLAN) (such as a Wireless Fidelity (Wi-Fi) network), Bluetooth (BT), Global Navigation Satellite System (GNSS), Frequency Modulation (FM), Near Field Communication (NFC), and infrared (IR) technology.
  • the electronic device 200 implements a display function through a GPU, a display screen 290, an application processor, and the like.
  • the GPU is a microprocessor for image processing and is connected to the display screen 290 and the application processor.
  • The GPU is used to perform mathematical and geometric calculations for graphics rendering.
  • the processor 210 may include one or more GPUs that execute program instructions to generate or change display information.
  • the electronic device 200 can implement a shooting function through an ISP, a camera module 291, a video codec, a GPU, a display screen 290, and an application processor.
  • The electronic device 200 may include 1 or N camera modules 291, where N is a positive integer greater than 1. If the electronic device 200 includes N cameras, one of them is the main camera.
  • the internal memory 221 may be used to store computer executable program code, where the executable program code includes instructions.
  • the internal memory 221 may include a storage program area and a storage data area.
  • the external memory interface 222 may be used to connect an external memory card, such as a Micro SD card, so as to expand the storage capacity of the electronic device 200.
  • the electronic device 200 can implement audio functions through an audio module 270, a speaker 271, a receiver 272, a microphone 273, a headphone interface 274, an application processor, and the like. For example, music playback, recording, etc.
  • the audio module 270 is used to convert digital audio information into an analog audio signal for output, and is also used to convert an analog audio input into a digital audio signal.
  • the audio module 270 can also be used to encode and decode audio signals.
  • the audio module 270 may be provided in the processor 210, or part of the functional modules of the audio module 270 may be provided in the processor 210.
  • The speaker 271, also called a "horn", is used to convert audio electrical signals into sound signals.
  • the electronic device 200 can listen to music through the speaker 271, or listen to a hands-free call.
  • The microphone 273, also called a "mic" or "sound transmitter", is used to convert sound signals into electrical signals. When making a call or sending a voice message, the user can speak with the mouth close to the microphone 273 to input the sound signal.
  • the electronic device 200 may be provided with at least one microphone 273.
  • the earphone interface 274 is used to connect wired earphones.
  • the depth sensor 2801 is used to obtain depth information of the scene.
  • the pressure sensor 2802 is used to sense the pressure signal and can convert the pressure signal into an electrical signal.
  • the gyro sensor 2803 may be used to determine the movement posture of the electronic device 200.
  • the air pressure sensor 2804 is used to measure air pressure.
  • the magnetic sensor 2805 includes a Hall sensor.
  • the electronic device 200 can use the magnetic sensor 2805 to detect the opening and closing of the flip holster.
  • the acceleration sensor 2806 can detect the magnitude of the acceleration of the electronic device 200 in various directions (generally three axes).
  • the distance sensor 2807 is used to measure distance.
  • the proximity light sensor 2808 may include, for example, a light emitting diode (LED) and a light detector, such as a photodiode.
  • the fingerprint sensor 2809 is used to collect fingerprints.
  • the temperature sensor 2810 is used to detect temperature.
  • the touch sensor 2811 may pass the detected touch operation to the application processor to determine the type of the touch event.
  • the visual output related to the touch operation can be provided through the display screen 290.
  • the ambient light sensor 2812 is used to sense the brightness of the ambient light.
  • the bone conduction sensor 2813 can acquire vibration signals.
  • the button 294 includes a power-on button, a volume button, and so on.
  • The button 294 may be a mechanical button or a touch button.
  • the motor 293 can generate vibration prompts. The motor 293 can be used for incoming call vibration notification, and can also be used for touch vibration feedback.
  • the indicator 292 can be an indicator light, which can be used to indicate the charging status, power change, or to indicate messages, missed calls, notifications, and so on.
  • the SIM card interface 295 is used to connect to the SIM card.
  • the electronic device 200 interacts with the network through the SIM card to implement functions such as call and data communication.
  • the present application also provides a computer-readable storage medium.
  • the computer-readable storage medium may be included in the electronic device described in the above embodiment; or it may exist alone without being assembled into the electronic device.
  • The computer-readable storage medium may be, for example, but not limited to, an electrical, magnetic, optical, electromagnetic, infrared, or semiconductor system, apparatus, or device, or any combination of the above. More specific examples of the computer-readable storage medium may include, but are not limited to: an electrical connection with one or more wires, a portable computer disk, a hard disk, a random access memory (RAM), a read-only memory (ROM), an erasable programmable read-only memory (EPROM or flash memory), an optical fiber, a portable compact disc read-only memory (CD-ROM), an optical storage device, a magnetic storage device, or any suitable combination of the above.
  • a computer-readable storage medium may be any tangible medium that contains or stores a program, and the program may be used by or in combination with an instruction execution system, apparatus, or device.
  • the computer-readable storage medium can send, propagate, or transmit the program for use by or in combination with the instruction execution system, apparatus, or device.
  • the program code contained on the computer-readable storage medium can be transmitted by any suitable medium, including but not limited to: wireless, wire, optical cable, RF, etc., or any suitable combination of the above.
  • the computer-readable storage medium carries one or more programs, and when the above one or more programs are executed by an electronic device, the electronic device realizes the method described in the following embodiments.
  • each block in the flowchart or block diagram may represent a module, program segment, or part of the code, and the above-mentioned module, program segment, or part of the code contains one or more executable instructions for realizing the specified logic function. It should also be noted that the blocks may be executed in an order different from the order marked in the drawings. For example, two blocks shown one after another can actually be executed substantially in parallel, and they can sometimes be executed in the reverse order, depending on the functions involved.
  • each block in the block diagram or flowchart, and any combination of blocks in the block diagram or flowchart, can be implemented by a dedicated hardware-based system that performs the specified functions or operations, or by a combination of dedicated hardware and computer instructions.
  • the units described in the embodiments of the present disclosure may be implemented in software or hardware, and the described units may also be provided in a processor. Among them, the names of these units do not constitute a limitation on the unit itself under certain circumstances.
  • Fig. 3 schematically shows a flowchart of an augmented reality processing method applied to a first terminal according to an exemplary embodiment of the present disclosure.
  • the augmented reality processing method may include the following steps:
  • after the first terminal uses its camera module to capture the current frame image, it can extract the two-dimensional feature point information of the current frame image as image parameters corresponding to the current frame image and send them to the cloud.
  • the image parameters of the current frame image may include two-dimensional feature point information and three-dimensional feature point information of the current frame image.
  • the two-dimensional feature point information of the current frame image can be extracted based on the combination of feature extraction algorithms and feature descriptors.
  • the feature extraction algorithms adopted by the exemplary embodiments of the present disclosure may include, but are not limited to, FAST feature point detection algorithm, DOG feature point detection algorithm, Harris feature point detection algorithm, SIFT feature point detection algorithm, SURF feature point detection algorithm, etc.
  • Feature descriptors may include, but are not limited to, BRIEF feature point descriptors, BRISK feature point descriptors, FREAK feature point descriptors, and so on.
  • the combination of the feature extraction algorithm and the feature descriptor may be the FAST feature point detection algorithm and the BRIEF feature point descriptor. According to other embodiments of the present disclosure, the combination of the feature extraction algorithm and the feature descriptor may be a DOG feature point detection algorithm and a FREAK feature point descriptor.
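  • The disclosure leaves the detector/descriptor pairing open. Purely as an illustrative sketch (not the patent's implementation), a simplified FAST-style segment test and a BRIEF-style random-pair binary descriptor might look as follows; the threshold, ring radius, contiguity count, and descriptor length are arbitrary choices for illustration:

```python
import numpy as np

# Ring of 16 pixels at radius 3 around the candidate (Bresenham circle, in order).
RING = [(0, 3), (1, 3), (2, 2), (3, 1), (3, 0), (3, -1), (2, -2), (1, -3),
        (0, -3), (-1, -3), (-2, -2), (-3, -1), (-3, 0), (-3, 1), (-2, 2), (-1, 3)]

def fast_corners(img, thresh=20, n_contig=9):
    """Simplified FAST segment test: a pixel is a corner when at least n_contig
    contiguous ring pixels are all brighter or all darker than it by thresh."""
    h, w = img.shape
    corners = []
    for y in range(3, h - 3):
        for x in range(3, w - 3):
            p = int(img[y, x])
            ring = np.array([int(img[y + dy, x + dx]) for dx, dy in RING])
            for mask in (ring > p + thresh, ring < p - thresh):
                doubled = np.concatenate([mask, mask])  # handle wrap-around runs
                run = best = 0
                for v in doubled:
                    run = run + 1 if v else 0
                    best = max(best, run)
                if best >= n_contig:
                    corners.append((x, y))
                    break
    return corners

def brief_descriptor(img, kp, rng, n_bits=128, patch=8):
    """BRIEF-style binary descriptor: random point-pair intensity comparisons."""
    x, y = kp
    pairs = rng.integers(-patch, patch + 1, size=(n_bits, 4))
    return np.array([int(img[y + y1, x + x1] < img[y + y2, x + x2])
                     for x1, y1, x2, y2 in pairs], dtype=np.uint8)
```

  In a real pipeline the detector would include non-maximum suppression and the descriptor a fixed learned sampling pattern; this sketch only shows the principle.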
  • the depth information corresponding to the two-dimensional feature point information can be combined to determine the three-dimensional feature point information of the current frame image.
  • the depth information corresponding to the current frame image can be acquired through the depth sensing module.
  • the depth sensing module may be any one of a dual camera module (for example, a color camera and a telephoto camera), a structured light module, and a TOF module.
  • the current frame image and the corresponding depth information can be registered to determine the depth information of each pixel on the current frame image.
  • the internal and external parameters of the camera module and the depth sensing module need to be calibrated in advance.
  • the coordinate P_ir of the pixel point in the depth sensing module coordinate system can be obtained.
  • P_ir can be multiplied by a rotation matrix R, and a translation vector T added, to convert P_ir into the coordinate system of the RGB camera and obtain P_rgb.
  • P_rgb can then be multiplied by the internal parameter matrix H_rgb of the camera module to obtain p_rgb.
  • p_rgb is also a three-dimensional vector, denoted as (x0, y0, z0), where x0 and y0 are the pixel coordinates of the point in the RGB image; the pixel value at that location is extracted and matched with the corresponding depth information.
  • this completes the alignment of the two-dimensional image information and depth information for one pixel. Performing the above process for each pixel completes the registration process.
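  • As a hedged illustration of the registration chain described above (the variable names P_ir, R, T, and H_rgb follow the text; the intrinsic matrix and extrinsics below are hypothetical example values, not taken from the patent), mapping a single depth pixel into RGB pixel coordinates can be sketched as:

```python
import numpy as np

def register_depth_pixel(u, v, d, K_ir, R, T, K_rgb):
    """Map a depth pixel (u, v) with depth d into RGB image coordinates,
    following the P_ir -> P_rgb -> p_rgb chain described in the text."""
    # back-project to the depth-sensing module's coordinate system
    P_ir = d * (np.linalg.inv(K_ir) @ np.array([u, v, 1.0]))
    # rigid transform into the RGB camera coordinate system
    P_rgb = R @ P_ir + T
    # project with the RGB camera's internal parameter matrix (H_rgb in the text)
    p_rgb = K_rgb @ P_rgb
    x0, y0 = p_rgb[0] / p_rgb[2], p_rgb[1] / p_rgb[2]
    return x0, y0, P_rgb[2]   # (x0, y0): pixel coordinates in the RGB image
```

  Running this over every depth pixel yields, for each RGB pixel, the registered depth value described in the text.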
  • the depth information corresponding to the two-dimensional feature point information can then be determined from the registration result, and the two-dimensional feature point information can be combined with its corresponding depth information to obtain the three-dimensional feature point information of the current frame image.
  • the depth information can also be de-noised to remove obviously wrong depth values in the depth information.
  • a deep neural network may be used to remove noise in the TOF image, which is not particularly limited in this exemplary embodiment.
  • FIG. 4 shows a schematic diagram of acquiring a current frame image in response to a user operation.
  • the interface displays a confirmation option for acquiring the video frame image, that is, confirming whether the camera should be turned on to perform the AR process. When the user confirms, the first terminal turns on the camera module, collects the current frame image, and executes the above-mentioned process of extracting image parameters.
  • the first terminal may send the image parameter to the cloud, so that the cloud uses the pre-stored mapping result to determine the information of the virtual object corresponding to the image parameter.
  • the first terminal may also send the location information of the current scene to the cloud.
  • the first terminal may use any of the following systems to determine the location information of the current scene: the Global Positioning System (GPS), the Global Navigation Satellite System (GLONASS), the BeiDou Navigation Satellite System (BDS), the Quasi-Zenith Satellite System (QZSS), and/or Satellite Based Augmentation Systems (SBAS).
  • the cloud can determine one or more pre-built maps corresponding to the location information. It is easy to understand that the pre-built map information can correspond to its actual geographic location information.
  • the image parameters of the key frame image corresponding to the map are determined and matched with the image parameters of the current frame image sent by the first terminal to determine the corresponding virtual object information.
  • the key frame images of these maps are used as a search set to find a key frame image matching the current frame image of the first terminal, and then the corresponding virtual object information is determined.
  • if no pre-built map corresponds to the location information, the cloud can send a mapping prompt to the first terminal to remind the first terminal that the process of constructing a map for the current scene can be performed.
  • the user can perform a map creation operation according to the prompts and feed back the map result to the cloud.
  • the cloud can determine the search range of the pre-stored mapping results according to the location information of the first terminal, and use only the results in the search range to determine the corresponding virtual object information. This avoids the long search times that arise when many mapping results are pre-stored.
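  • A minimal sketch of restricting the search range by location (the haversine radius, the map-record layout, and the 500 m default are assumptions for illustration, not values specified by the patent):

```python
import math

def haversine_m(lat1, lon1, lat2, lon2):
    """Great-circle distance in metres between two (lat, lon) points."""
    R = 6371000.0  # mean Earth radius in metres
    p1, p2 = math.radians(lat1), math.radians(lat2)
    dp, dl = math.radians(lat2 - lat1), math.radians(lon2 - lon1)
    a = math.sin(dp / 2) ** 2 + math.cos(p1) * math.cos(p2) * math.sin(dl / 2) ** 2
    return 2 * R * math.asin(math.sqrt(a))

def search_range(maps, lat, lon, radius_m=500.0):
    """Keep only pre-stored maps whose recorded location is within radius_m
    of the first terminal's reported location."""
    return [m for m in maps if haversine_m(lat, lon, m["lat"], m["lon"]) <= radius_m]
```

  Key-frame matching then only needs to run against the maps returned by `search_range`, rather than against every pre-stored mapping result.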
  • the pre-stored mapping result is searched to determine the key frame image corresponding to the image parameters of the current frame image, and then the virtual object corresponding to that key frame image is determined, so as to obtain the virtual object information corresponding to the current frame image of the first terminal.
  • the determined image matching the current frame image is recorded as the reference image, and the terminal that took the reference image is recorded as the second terminal.
  • the virtual object information placed on the first terminal is determined.
  • the pose of the current frame image relative to the second terminal can be determined, and, combined with the pose information of the first terminal when the current frame image was collected, the relative pose relationship between the first terminal and the second terminal can be determined.
  • the relationship between the two-dimensional feature point information of the current frame image and the two-dimensional feature point information of the reference image can be determined through feature matching or descriptor matching. If the two-dimensional feature point information of the current frame image is determined to match the two-dimensional feature point information of the reference image, the Iterative Closest Point (ICP) method can be used to determine the relative pose relationship between the three-dimensional feature point information of the current frame image and the three-dimensional feature point information of the reference image.
  • the three-dimensional feature point information of the current frame image is the point cloud information corresponding to the current frame image
  • the three-dimensional feature point information of the reference image is the point cloud information of the reference image.
  • the two point clouds can be used as input, and, by inputting the specified pose as the initial value, the optimal relative pose after aligning the two point clouds is obtained by the iterative closest point method; that is, the relative pose relationship between the three-dimensional feature point information of the current frame image and the three-dimensional feature point information of the reference image is determined.
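  • A compact numpy sketch of the iterative closest point alignment described above, taking an initial pose as input as the text suggests. The brute-force nearest-neighbour search and fixed iteration count are simplifications for illustration; a production system would use a spatial index and a convergence test:

```python
import numpy as np

def best_rigid_transform(A, B):
    """Least-squares rotation and translation mapping point set A onto B (Kabsch/SVD)."""
    ca, cb = A.mean(axis=0), B.mean(axis=0)
    H = (A - ca).T @ (B - cb)
    U, _, Vt = np.linalg.svd(H)
    R = Vt.T @ U.T
    if np.linalg.det(R) < 0:        # guard against reflections
        Vt[-1] *= -1
        R = Vt.T @ U.T
    return R, cb - R @ ca

def icp(src, dst, R0=None, t0=None, iters=30):
    """Iterative closest point: refine an initial pose (R0, t0) so that
    R @ p + t maps points of src onto dst."""
    R = np.eye(3) if R0 is None else R0
    t = np.zeros(3) if t0 is None else t0
    for _ in range(iters):
        moved = src @ R.T + t
        # brute-force nearest neighbours (fine for small point clouds)
        d2 = ((moved[:, None, :] - dst[None, :, :]) ** 2).sum(axis=-1)
        corr = dst[d2.argmin(axis=1)]
        R_step, t_step = best_rigid_transform(moved, corr)
        R, t = R_step @ R, R_step @ t + t_step
    return R, t
```

  Supplying a coarse initial pose (for example, one obtained from a PnP solution) is what makes the nearest-neighbour correspondences reliable on real data.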
  • the relationship between the two-dimensional information is determined first. Because the two-dimensional relationship is determined using feature matching or descriptor matching, the process is simple, so the entire matching process can be accelerated; in addition to improving accuracy, obvious mismatches can also be screened out in advance.
  • the exemplary embodiment of the present disclosure may also include a solution for removing mismatched points.
  • the RANSAC (Random Sample Consensus) method can be used to eliminate mismatched feature point information. Specifically, a certain number of matching pairs (for example, 7 pairs, 8 pairs, etc.) are randomly selected from the matching pairs between the two-dimensional feature points of the current frame image and those of the reference image, and the fundamental matrix or essential matrix between the current frame image and the reference image is calculated from the selected pairs. Based on the epipolar constraint, if a two-dimensional feature point is far from its corresponding epipolar line (for example, the distance is greater than a threshold), the point can be considered a mismatched point. By iterating the random sampling process a certain number of times, the sampling result with the largest number of inliers is selected as the final matching result. On this basis, the mismatched feature point information can be eliminated from the three-dimensional feature point information of the current frame image.
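  • The RANSAC elimination of mismatched pairs might be sketched as follows. Normalized image coordinates, the eight-point algorithm, the iteration count, and the distance threshold are all illustrative assumptions, not prescribed by the patent:

```python
import numpy as np

def eight_point(x1, x2):
    """Fundamental/essential matrix from 8+ correspondences (x2h^T F x1h = 0)."""
    A = np.column_stack([
        x2[:, 0] * x1[:, 0], x2[:, 0] * x1[:, 1], x2[:, 0],
        x2[:, 1] * x1[:, 0], x2[:, 1] * x1[:, 1], x2[:, 1],
        x1[:, 0], x1[:, 1], np.ones(len(x1))])
    _, _, Vt = np.linalg.svd(A)
    F = Vt[-1].reshape(3, 3)
    U, S, Vt = np.linalg.svd(F)                 # enforce the rank-2 constraint
    return U @ np.diag([S[0], S[1], 0.0]) @ Vt

def epipolar_distance(F, x1, x2):
    """Distance of each point x2 from the epipolar line of its matched x1."""
    x1h = np.column_stack([x1, np.ones(len(x1))])
    x2h = np.column_stack([x2, np.ones(len(x2))])
    lines = x1h @ F.T
    return np.abs((x2h * lines).sum(axis=1)) / np.hypot(lines[:, 0], lines[:, 1])

def ransac_inlier_mask(x1, x2, iters=200, thresh=1e-3, seed=0):
    """Keep the random sample whose model explains the most matches (the inliers)."""
    rng = np.random.default_rng(seed)
    best = np.zeros(len(x1), dtype=bool)
    for _ in range(iters):
        idx = rng.choice(len(x1), 8, replace=False)
        F = eight_point(x1[idx], x2[idx])
        mask = epipolar_distance(F, x1, x2) < thresh
        if mask.sum() > best.sum():
            best = mask
    return best
```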
  • the three-dimensional feature point information from which the mismatched feature point information is eliminated can be used to determine the pose of the current frame image relative to the second terminal.
  • if the two-dimensional feature point information of the current frame image matches the two-dimensional feature point information of the reference image, the two-dimensional feature points of the current frame image can be associated with the three-dimensional feature points of the reference image, and the resulting 2D-3D point correspondences can be used as input to solve the Perspective-n-Point (PnP) problem.
  • PnP is a classic method in the field of machine vision, which can determine the relative pose between the camera and the object according to n feature points on the object. Specifically, the rotation matrix and translation vector between the camera and the object can be determined according to the n feature points on the object. In addition, n may be determined to be 4 or more, for example.
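  • As an illustration of recovering a pose from n 2D-3D correspondences, a basic DLT-style PnP solver might look like the following. It assumes normalized image coordinates, noiseless input, and n >= 6 non-coplanar points; production systems typically use minimal solvers (such as P3P) inside RANSAC followed by refinement:

```python
import numpy as np

def pnp_dlt(X, x):
    """DLT solution of the Perspective-n-Point problem for n >= 6 points.
    X: Nx3 3D points (world frame); x: Nx2 normalized image projections,
    with x ~ project(R @ X + t). Returns the rotation matrix R and vector t."""
    n = len(X)
    A = np.zeros((2 * n, 12))
    for i in range(n):
        Xh = np.append(X[i], 1.0)
        A[2 * i, 0:4] = Xh
        A[2 * i, 8:12] = -x[i, 0] * Xh
        A[2 * i + 1, 4:8] = Xh
        A[2 * i + 1, 8:12] = -x[i, 1] * Xh
    _, _, Vt = np.linalg.svd(A)
    P = Vt[-1].reshape(3, 4)
    if (P @ np.append(X[0], 1.0))[2] < 0:   # points must lie in front of the camera
        P = -P
    M, p4 = P[:, :3], P[:, 3]
    U, S, Vt = np.linalg.svd(M)             # for calibrated input, M = scale * R
    R = U @ Vt
    if np.linalg.det(R) < 0:                # defensive: keep R a proper rotation
        R = U @ np.diag([1.0, 1.0, -1.0]) @ Vt
    return R, p4 / S.mean()
```

  The recovered (R, t) is exactly the rotation matrix and translation vector between the camera and the object that the text refers to, and can serve as the initial pose for the ICP refinement described next.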
  • in addition, the PnP solution result of the previous embodiment can be used as the initial pose input for iteration: the iterative closest point method then determines the relative pose relationship between the three-dimensional feature point information of the current frame image and the three-dimensional feature point information of the reference image, so as to determine the pose of the current frame image relative to the second terminal. It is easy to see that in this embodiment, PnP is combined with ICP to improve the accuracy of the determined pose relationship.
  • the cloud may send the virtual object information to the first terminal.
  • before the cloud sends the virtual object information, it can determine the acquisition authority of the virtual object.
  • the acquisition authority can be set by the second terminal configuring the virtual object.
  • the acquisition authority includes, but is not limited to: only friends can obtain it, only the second terminal can obtain it, open to all devices, and so on.
  • if the first terminal satisfies the acquisition authority, the cloud executes the process of sending the virtual object information to the first terminal; if the first terminal does not satisfy the acquisition authority, the cloud can send a permission error prompt to the first terminal to indicate that the first terminal does not have the authority to obtain the virtual object. Alternatively, if the first terminal does not satisfy the acquisition authority, no result is fed back to the first terminal, in which case the first terminal only learns that there is no pre-configured virtual object in the current scene.
  • the cloud can obtain the identity of the first terminal. If the identity of the first terminal is in the identity white list configured by the second terminal, the first terminal satisfies the acquisition authority; if the identity of the first terminal is not in the identity white list, the first terminal does not satisfy the acquisition authority.
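  • The whitelist check described above might be sketched as follows (the policy structure and field names are hypothetical, not taken from the patent):

```python
def check_acquisition_authority(requester_id, policy):
    """Return True if the requesting terminal may obtain the virtual object.
    `policy` is a hypothetical dict configured by the second terminal, e.g.
    {"mode": "whitelist", "whitelist": {"device-a"}} or {"mode": "public"}."""
    if policy.get("mode") == "public":         # open to all devices
        return True
    if policy.get("mode") == "whitelist":      # e.g. friends-only / owner-only
        return requester_id in policy.get("whitelist", set())
    return False                               # default: deny
```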
  • after acquiring the information of the virtual object, the first terminal parses out the virtual object itself and the corresponding position information in order to display the virtual object. Thus, the user can see the virtual object through the screen of the first terminal.
  • the virtual object can be edited by the person who created the map through the mapping terminal, and the present disclosure does not limit the type, color, and size of the virtual object.
  • the editing operation for the virtual object may include at least one of the following: deleting the virtual object, moving the virtual object, rotating the virtual object, and modifying the attribute of the virtual object.
  • the attributes of the virtual object may include, but are not limited to, size, color, deformation direction and degree, etc.
  • it is also possible to perform a cutting operation on the virtual object; for example, the virtual object is structurally divided into two or more parts, and only one part is retained when displayed.
  • the editing sub-interface of the virtual object can be displayed in response to a selection operation for the virtual object, that is, an operation that selects the virtual object.
  • the editing sub-interface may be a sub-interface independent of the terminal interface.
  • in response to an editing operation on the editing sub-interface, the virtual object is edited.
  • in addition, editing of the virtual object can also be realized by directly clicking, stretching, or moving the virtual object, and so on.
  • the camera module of the first terminal 51 shoots toward the real table 52.
  • a virtual object 53 is displayed on the screen of the first terminal 51.
  • the virtual object 53 is, for example, a virtual ball.
  • the editing sub-interface 500 may appear in the interface on the screen, and the user can click the move, modify-attributes, delete, and other buttons to further realize the corresponding editing functions.
  • the user can also display the editing sub-interface 500 by long-pressing the screen, double-clicking the screen, etc., which is not limited in this exemplary embodiment.
  • the virtual object can also be selected through an object list, thereby avoiding the selection difficulties caused by mutual occlusion between virtual objects or between virtual objects and real objects.
  • a virtual object 53 and a virtual object 61 are displayed on the screen of the first terminal 51.
  • the user can expand the object list 600 by clicking the hide button 601. The object list 600 can include a thumbnail or logo (for example, text, etc.) of each virtual object, so that the user can trigger the display of the editing sub-interface 500 by clicking the thumbnail or logo.
  • the object list 600 can be hidden to avoid occluding other objects in the interface.
  • FIG. 7 shows a schematic diagram of moving the virtual object.
  • the virtual object 53 can be moved from the position A to the position B by dragging the virtual object 53.
  • after the user clicks the move button in the editing sub-interface 500 and then clicks the area of position B, the effect of moving the virtual object 53 from position A to position B can be realized.
  • FIG. 8 shows a schematic diagram of editing the size of a virtual object.
  • the size modification sub-interface 801 is further displayed; after the user clicks the button corresponding to increasing the size, the virtual object 53 can be enlarged to obtain the virtual object 81.
  • the user can also directly adjust the size of the virtual object through the stretching operation for the virtual object.
  • Figure 9 shows a schematic diagram of deleting a virtual object. Specifically, in response to the user's click operation on the delete button in the editing sub-interface 500, a sub-interface 901 for confirming deletion is further displayed; upon confirmation, the virtual object 53 is deleted from the scene.
  • the first terminal can send the edited result to the cloud for cloud storage.
  • the present disclosure also provides a solution for adding new virtual objects in the scene.
  • the first terminal may respond to a virtual object adding operation, add a new virtual object in the scene where the first terminal is located, and send the new virtual object information to the cloud. The cloud then matches the new virtual object information with the map information of the scene where the first terminal is currently located; that is, the new virtual object information can be associated with the current map ID.
  • the application program is configured with an add object button. After the user clicks the add object button, the add object sub-interface 100 may appear on the interface of the first terminal 51. In addition, the user can also present the object adding sub-interface 100 through preset presentation rules (for example, double-tapping the screen, long-pressing the screen, etc.).
  • the user can select one or more virtual objects from existing objects to add to the scene, or edit one or more new virtual objects to add to the scene by themselves.
  • the existing objects may include virtual objects edited by the developer in advance and downloaded to the first terminal when downloading the AR application, or virtual objects shared by developers on the network.
  • existing objects also include virtual objects edited in the user's history, which are not limited in this exemplary embodiment.
  • a new virtual object 101, such as a virtual cup, can be added to the scene.
  • the edited virtual object information can be used to replace the virtual object information stored before the editing.
  • the cloud only stores the latest editing results, and deletes the editing results that may exist in the history.
  • the cloud can store both the virtual object information after editing and the virtual object information before editing.
  • the edited virtual object information and the virtual object information before editing can be sent to the third terminal 111 together.
  • the selection object sub-interface 110 can be displayed on the screen. Next, in response to the user's selection operation for different editing results, the virtual object actually displayed on the interface can be determined.
  • FIG. 12 schematically shows a flowchart of an augmented reality processing method applied to the cloud according to an exemplary embodiment of the present disclosure.
  • the augmented reality processing method may include the following steps:
  • the processing of step S122 to step S128 has been described above in connection with step S32 to step S36, and will not be repeated here.
  • step S1302 the second terminal maps the scene to obtain map information of the current scene; in step S1304, the second terminal responds to the user's operation of placing anchor point information and configures virtual objects in the scene. It can be understood that, In the same scenario, the virtual object is associated with the map information; in step S1306, the second terminal may upload the constructed map information and the virtual object information to the cloud.
  • step S1308 the first terminal obtains the current frame image collected by its camera module and extracts image parameters; in step S1310, the first terminal uploads the extracted image parameters to the cloud.
  • in step S1312, the cloud uses the pre-stored mapping results, including the map information uploaded by the second terminal, to perform feature search and matching on the image parameters uploaded by the first terminal, and determines the virtual object information corresponding to the image parameters uploaded by the first terminal; in step S1314, the cloud sends the determined virtual object information to the first terminal.
  • step S1316 the first terminal displays the virtual object and re-edits the virtual object in response to a user operation; in step S1318, the first terminal feeds back the result of the re-editing to the cloud.
  • step S1320 the cloud stores the re-edited result and matches it with the corresponding map information.
  • FIG. 14 shows a schematic diagram of an effect of applying the augmented reality processing solution of the present disclosure. Specifically, when a user arrives at a scene, he can use his mobile phone to open a multi-person AR application, and then obtain information about the current scene through the interface. As shown in FIG. 14, an introduction card 140 of "XX Former Residence" is displayed.
  • the solution of the present disclosure has a wide range of application scenarios, and for example, virtual descriptions of buildings, virtual display of restaurant evaluations, virtual navigation icons for indoor shopping malls can be placed to facilitate subsequent users to find them, and so on. This disclosure does not limit this.
  • Based on the above augmented reality processing method, on the one hand, when acquiring virtual object information, the user does not need to input a room ID number; instead, the cloud determines the virtual object information by searching for the mapping result that matches the current frame image. Once the current frame image is determined, no user operation is required, the corresponding virtual object information is matched intelligently, and convenience is improved. On the other hand, compared with solutions that require entering a room ID number to obtain the virtual object, the user does not need to memorize the room ID numbers of different scenes. In addition, the solution of the disclosure can re-edit the pre-configured virtual objects, which enhances the interest of the multi-person AR experience, and can use GPS and other means to locate the corresponding map information, so as to quickly search the image and determine the virtual object.
  • this exemplary embodiment also provides an augmented reality processing device applied to the first terminal.
  • FIG. 15 schematically shows a block diagram of an augmented reality processing apparatus applied to a first terminal according to an exemplary embodiment of the present disclosure.
  • the augmented reality processing device 15 applied to the first terminal according to an exemplary embodiment of the present disclosure may include a parameter upload module 151, a virtual object acquisition module 153, and a virtual object editing module 155.
  • the parameter uploading module 151 may be used to obtain the current frame image collected by the camera module of the first terminal, extract the image parameters of the current frame image, and send the image parameters to the cloud, so that the cloud can use the pre-stored mapping results to determine the virtual object information corresponding to the image parameters; the virtual object acquisition module 153 can be used to receive the virtual object information sent by the cloud and display the virtual object; the virtual object editing module 155 can be used to edit the virtual object in response to an editing operation for the virtual object.
  • Based on the augmented reality processing device applied to the first terminal, on the one hand, when acquiring virtual object information, the user does not need to input a room ID number; instead, the cloud determines the virtual object information by searching for the mapping result that matches the current frame image. Once the current frame image is determined, no user operation is required, the corresponding virtual object information is matched intelligently, and convenience is improved. On the other hand, compared with solutions that require entering a room ID number to obtain the virtual object, the user does not need to memorize the room ID numbers of different scenes. In addition, the solution of the disclosure can re-edit the pre-configured virtual objects, which enhances the fun of the multi-person AR experience.
  • the virtual object editing module 155 may also feed back the editing result to the cloud.
  • the augmented reality processing device 16 may further include a location uploading module 161.
  • the location upload module 161 may be configured to execute: obtain location information of the scene where the first terminal is located; send the location information to the cloud, so that the cloud can determine the search range of the mapping result and The virtual object information corresponding to the image parameter is determined by using the mapping result in the search range.
  • the virtual object editing module 155 may be configured to execute: on the interface of the first terminal, in response to a selection operation on the virtual object, display the editing sub-interface of the virtual object ; In response to an editing operation on the editing sub-interface, edit the virtual object.
  • the type of editing of a virtual object includes at least one of deleting the virtual object, moving the virtual object, rotating the virtual object, and modifying the attribute of the virtual object.
  • the augmented reality processing device 17 may further include a virtual object adding module 171.
  • the virtual object adding module 171 may be configured to perform: in response to the virtual object adding operation, adding a new virtual object in the scene where the first terminal is located; and sending information about the new virtual object to the Cloud.
  • the image parameters include the two-dimensional feature point information and the three-dimensional feature point information of the current frame image.
  • the process of extracting the image parameters of the current frame image by the parameter uploading module 151 may be configured to perform: performing two-dimensional feature point extraction on the current frame image to determine the two-dimensional feature point information of the current frame image; acquiring depth information corresponding to the two-dimensional feature point information, and determining the three-dimensional feature point information of the current frame image according to the two-dimensional feature point information and the depth information corresponding to the two-dimensional feature point information.
  • the parameter upload module 151 may be further configured to execute: obtain the depth information corresponding to the current frame image collected by the depth sensing module of the first terminal; The frame image is registered with the depth information corresponding to the current frame image, and the depth information of each pixel on the current frame image is determined; the depth information of each pixel on the current frame image is determined to be the same as the two Depth information corresponding to the dimensional feature point information; using the two-dimensional feature point information and the depth information corresponding to the two-dimensional feature point information to determine the three-dimensional feature point information of the current frame image.
  • this exemplary embodiment also provides an augmented reality processing device applied to the cloud.
  • FIG. 18 schematically shows a block diagram of an augmented reality processing device applied to the cloud according to an exemplary embodiment of the present disclosure.
  • the augmented reality processing device 18 applied to the cloud according to an exemplary embodiment of the present disclosure may include a parameter acquisition module 181, a virtual object determination module 183, a virtual object transmission module 185 and an editing result acquisition module 187.
  • the parameter acquisition module 181 may be used to acquire the image parameters of the current frame image sent by the first terminal; the virtual object determination module 183 may be used to determine the virtual object information corresponding to the image parameters of the current frame image by using pre-stored mapping results; the virtual object sending module 185 can be used to send the virtual object information to the first terminal, so as to display the virtual object on the first terminal; and the editing result acquisition module 187 can be used to acquire and store the result of the first terminal editing the virtual object.
  • Based on the augmented reality processing device applied to the cloud of the exemplary embodiment of the present disclosure, on the one hand, when acquiring virtual object information, the user does not need to input a room ID number; instead, the cloud determines the virtual object information by searching for the mapping result that matches the current frame image. Once the current frame image is determined, no user operation is required, the corresponding virtual object information is matched intelligently, and convenience is improved. On the other hand, compared with solutions that require entering a room ID number to obtain the virtual object, the user does not need to memorize the room ID numbers of different scenes. In addition, the solution of the disclosure can re-edit the pre-configured virtual objects, which enhances the fun of the multi-person AR experience.
  • the virtual object determination module 183 may be configured to execute: obtaining the location information of the scene where the first terminal is located; determining the search range of the mapping results corresponding to the location information; and determining, using the mapping results within the search range, the virtual object information corresponding to the image parameters of the current frame image.
  • the virtual object determination module 183 may be configured to execute: filtering out, from the pre-stored mapping results, a reference image matching the image parameters of the current frame image, and determining the second terminal that captured the reference image; determining the pose of the current frame image relative to the second terminal by using the image parameters of the current frame image and the image parameters of the reference image; determining the relative pose relationship between the first terminal and the second terminal according to the pose of the current frame image relative to the second terminal and the pose information of the first terminal when the current frame image was captured; and determining the virtual object information corresponding to the image parameters of the current frame image by using the relative pose relationship between the first terminal and the second terminal in combination with the virtual object information configured by the second terminal during mapping.
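The pose chain in this module — the current frame's pose relative to the second terminal, combined with the first terminal's own pose when the frame was captured — can be sketched with 4×4 homogeneous transforms. This is a minimal illustration, not the patent's implementation; the frame conventions and the example poses below are assumptions:

```python
import numpy as np

def make_T(R, t):
    """Assemble a 4x4 homogeneous rigid transform from R (3x3) and t (3,)."""
    T = np.eye(4)
    T[:3, :3] = R
    T[:3, 3] = t
    return T

def invert_T(T):
    """Closed-form inverse of a rigid transform: [R t]^-1 = [R^T, -R^T t]."""
    R, t = T[:3, :3], T[:3, 3]
    Ti = np.eye(4)
    Ti[:3, :3] = R.T
    Ti[:3, 3] = -R.T @ t
    return Ti

# Hypothetical inputs: pose of the current frame expressed relative to the
# second terminal, and the first terminal's pose when the frame was captured.
Rz = lambda a: np.array([[np.cos(a), -np.sin(a), 0.0],
                         [np.sin(a),  np.cos(a), 0.0],
                         [0.0,        0.0,       1.0]])
T_frame_rel_t2 = make_T(Rz(np.pi / 2), np.array([1.0, 0.0, 0.0]))
T_t1_pose = make_T(np.eye(3), np.array([0.0, 2.0, 0.0]))

# One way to compose the two into a first-terminal-to-second-terminal
# relative pose (the exact frame conventions are an assumption here):
T_t1_rel_t2 = T_frame_rel_t2 @ invert_T(T_t1_pose)
```

Once such a relative pose is known, a virtual object anchored by the second terminal can be re-expressed in the first terminal's coordinate frame by applying it.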
  • the virtual object determination module 183 may be configured to execute: if the two-dimensional feature point information of the current frame image matches the two-dimensional feature point information of the reference image, determining the relative pose relationship between the three-dimensional feature point information of the current frame image and the three-dimensional feature point information of the reference image by means of the iterative closest point method, so as to obtain the pose of the current frame image relative to the second terminal.
  • the virtual object determination module 183 may be configured to execute: determining mismatched feature point information between the two-dimensional feature point information of the current frame image and the two-dimensional feature point information of the reference image; and removing the mismatched feature point information from the three-dimensional feature point information of the current frame image, so that the relative pose relationship is determined between the three-dimensional feature point information of the current frame image and the three-dimensional feature point information of the reference image after the mismatched feature point information has been removed from each.
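Mismatched feature points are commonly rejected with a RANSAC loop (elsewhere in this disclosure the loop is described with random samples of matches and an epipolar-distance test against a fundamental/essential matrix). The hypothesize-score-keep-best skeleton of RANSAC can be sketched on a toy 2D line model; the line model and threshold below are illustrative stand-ins for the epipolar test, not the patent's implementation:

```python
import numpy as np

def ransac_line(points, iters=200, thresh=0.05, seed=0):
    """Generic RANSAC loop: repeatedly fit a model to a minimal random
    sample and keep the hypothesis that explains the most inliers.
    Here a 2D line (minimal sample: 2 points) stands in for the
    fundamental matrix; the distance test stands in for the epipolar
    distance test."""
    rng = np.random.default_rng(seed)
    best_inliers = np.zeros(len(points), dtype=bool)
    for _ in range(iters):
        i, j = rng.choice(len(points), size=2, replace=False)
        p, q = points[i], points[j]
        d = q - p
        n = np.array([-d[1], d[0]])          # normal of the candidate line
        norm = np.linalg.norm(n)
        if norm == 0.0:
            continue                          # degenerate sample, resample
        n = n / norm
        dist = np.abs((points - p) @ n)       # point-to-line distances
        inliers = dist < thresh
        if inliers.sum() > best_inliers.sum():
            best_inliers = inliers
    return best_inliers
```

Points flagged as outliers by the winning hypothesis are the "mismatched feature point information" that gets removed before the 3D alignment step.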
  • the virtual object determination module 183 may be configured to execute: if the two-dimensional feature point information of the current frame image matches the two-dimensional feature point information of the reference image, associating the two-dimensional feature point information of the current frame image with the three-dimensional feature point information of the reference image to obtain point-pair information; and solving the perspective-n-point problem with the point-pair information, then determining the pose of the current frame image relative to the second terminal according to the three-dimensional feature point information of the current frame image in combination with the solution result.
  • the virtual object determination module 183 may be configured to execute: determining, according to the solution result, the relative pose relationship between the three-dimensional feature point information of the current frame image and the three-dimensional feature point information of the reference image; and, taking that relative pose relationship as the initial pose input, determining the relative pose relationship between the three-dimensional feature point information of the current frame image and the three-dimensional feature point information of the reference image by the iterative closest point method, so as to determine the pose of the current frame image relative to the second terminal.
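Each iteration of the iterative closest point method alternates a correspondence search with a closed-form rigid alignment of the matched 3D points. That inner alignment step can be written with the SVD (Kabsch) solution — a sketch under the assumption that correspondences are already fixed, which is also what each ICP iteration solves internally:

```python
import numpy as np

def rigid_align(src, dst):
    """Least-squares rotation R and translation t with dst ~ R @ src + t.
    This is the closed-form (Kabsch/SVD) alignment performed inside each
    ICP iteration once point correspondences have been fixed."""
    c_src, c_dst = src.mean(axis=0), dst.mean(axis=0)
    H = (src - c_src).T @ (dst - c_dst)   # 3x3 cross-covariance matrix
    U, _, Vt = np.linalg.svd(H)
    R = Vt.T @ U.T
    if np.linalg.det(R) < 0:              # guard against a reflection
        Vt[-1] *= -1
        R = Vt.T @ U.T
    t = c_dst - R @ c_src
    return R, t

# Recover a known rigid motion from matched point clouds (illustrative data).
rng = np.random.default_rng(1)
src = rng.normal(size=(30, 3))
a = 0.4
R_true = np.array([[np.cos(a), -np.sin(a), 0.0],
                   [np.sin(a),  np.cos(a), 0.0],
                   [0.0,        0.0,       1.0]])
t_true = np.array([0.5, -1.0, 2.0])
dst = src @ R_true.T + t_true
R_est, t_est = rigid_align(src, dst)
```

Feeding the PnP result in as the initial pose, as the module above describes, simply gives this alternation a good starting alignment so the correspondence search converges to the right minimum.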
  • the augmented reality processing device 19 may further include a mapping prompt module 191.
  • the mapping prompt module 191 may be configured to execute: if no reference image matching the image parameters of the current frame image exists in the pre-stored mapping results, sending a mapping prompt to the first terminal, so as to prompt the first terminal to perform the map construction process for the scene in which it is located.
  • the virtual object sending module 185 may be configured to execute: determining the acquisition authority of the virtual object information; if the first terminal satisfies the acquisition authority, executing the process of sending the virtual object information to the first terminal; and if the first terminal does not satisfy the acquisition authority, sending an authority error prompt to the first terminal.
  • the virtual object sending module 185 may be configured to execute: obtaining the identity of the first terminal; if the identity of the first terminal is in the identity whitelist, the first terminal satisfies the acquisition authority; and if the identity of the first terminal is not in the identity whitelist, the first terminal does not satisfy the acquisition authority.
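The whitelist rule above reduces to a small lookup. A hypothetical sketch — the function name, return shape, and prompt strings are illustrative, not from the patent:

```python
def check_acquisition_authority(terminal_id, whitelist):
    """Decide the cloud's response to one requesting terminal.

    Mirrors the whitelist rule described above: a terminal whose identity
    is in the identity whitelist satisfies the acquisition authority and
    receives the virtual object information; any other terminal receives
    an authority error prompt.
    """
    if terminal_id in whitelist:
        return {"status": "ok", "action": "send_virtual_object_info"}
    return {"status": "error", "action": "send_authority_error_prompt"}


# Example: the whitelist would be configured by the second terminal.
whitelist = {"terminal-a", "terminal-b"}
decision = check_acquisition_authority("terminal-a", whitelist)
```

A set (rather than a list) keeps the membership test O(1) even when the whitelist grows.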
  • the augmented reality processing device 20 may further include a first editing result processing module 201.
  • the first editing result processing module 201 may be configured to execute: after the result of the first terminal's editing of the virtual object is obtained, replacing the pre-editing virtual object information with the edited virtual object information.
  • the augmented reality processing device 21 may further include a second editing result processing module 211.
  • the second editing result processing module 211 may be configured to execute: when the virtual object information needs to be sent to a third terminal, sending both the edited virtual object information and the pre-editing virtual object information to the third terminal, so that the third terminal can display the edited virtual object or the pre-editing virtual object in response to a virtual object selection operation.
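Keeping both the pre-editing and the edited version, and letting a third terminal choose between them, can be sketched as a small cloud-side version store. All names and structures here are hypothetical illustrations of the behavior described above:

```python
class VirtualObjectStore:
    """Toy cloud-side store keeping the original and the latest edited
    version of each virtual object, keyed by a map/scene identifier."""

    def __init__(self):
        self._objects = {}  # map_id -> {"original": info, "edited": info}

    def put_original(self, map_id, info):
        """Record the virtual object configured during map creation."""
        self._objects[map_id] = {"original": info, "edited": None}

    def apply_edit(self, map_id, edited_info):
        """Store an editing result without discarding the original."""
        self._objects[map_id]["edited"] = edited_info

    def versions_for_client(self, map_id):
        """Return every available version, so the client can offer a
        selection sub-interface between pre- and post-editing states."""
        entry = self._objects[map_id]
        return {name: info for name, info in entry.items() if info is not None}


store = VirtualObjectStore()
store.put_original("scene-1", {"model": "ball", "scale": 1.0})
store.apply_edit("scene-1", {"model": "ball", "scale": 2.0})
choices = store.versions_for_client("scene-1")
```

The alternative policy described for module 201 — replace-on-edit — would simply overwrite the `original` entry instead of filling `edited`.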
  • the augmented reality processing device 22 may further include a new virtual object matching module 2201.
  • the new virtual object matching module 2201 may be configured to execute: obtaining the information of a new virtual object sent by the first terminal; and matching the information of the new virtual object with the map information of the scene where the first terminal is located.
  • the example embodiments described here can be implemented by software, or by software combined with the necessary hardware. Therefore, the technical solution according to the embodiments of the present disclosure can be embodied in the form of a software product, which can be stored in a non-volatile storage medium (which may be a CD-ROM, USB flash drive, portable hard drive, etc.) or on a network, and which includes several instructions to cause a computing device (which may be a personal computer, a server, a terminal device, a network device, etc.) to execute the method according to the embodiments of the present disclosure.
  • although several modules or units of the device for action execution are mentioned in the detailed description above, this division is not mandatory.
  • the features and functions of two or more modules or units described above may be embodied in one module or unit.
  • Conversely, the features and functions of one module or unit described above may be further divided into multiple modules or units.

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • General Engineering & Computer Science (AREA)
  • Software Systems (AREA)
  • Computer Hardware Design (AREA)
  • Computer Graphics (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Architecture (AREA)
  • Human Computer Interaction (AREA)
  • Multimedia (AREA)
  • Processing Or Creating Images (AREA)
  • User Interface Of Digital Computer (AREA)

Abstract

An augmented reality processing method, an augmented reality processing device, an augmented reality processing system, a computer-readable storage medium, and an electronic device, relating to the field of augmented reality technology. The augmented reality processing method includes: acquiring a current frame image captured by a camera module of a first terminal, extracting image parameters of the current frame image, and sending the image parameters to the cloud so that the cloud determines, using pre-stored mapping results, the information of the virtual object corresponding to the image parameters (S32); receiving the virtual object information sent by the cloud and displaying the virtual object (S34); and editing the virtual object in response to an editing operation on the virtual object (S36). User operations during the multi-user AR process are reduced, and the fun of the multi-user AR experience can be enhanced.

Description

Augmented reality processing method and device, system, storage medium, and electronic device
Cross-Reference to Related Applications
This application claims priority to Chinese patent application No. 201911348471.0, filed on December 24, 2019 and entitled "Augmented reality processing method and device, system, storage medium, and electronic device", the entire contents of which are incorporated herein by reference.
Technical Field
The present disclosure relates to the field of augmented reality technology and, in particular, to an augmented reality processing method, an augmented reality processing device, an augmented reality processing system, a computer-readable storage medium, and an electronic device.
Background
Augmented reality (AR) is a technology that merges the virtual world with the real world, and it has been widely applied in education, gaming, healthcare, the Internet of Things, intelligent manufacturing, and many other fields.
In multi-user AR solutions, virtual object information can be shared among multiple terminals. However, in this process a terminal must enter a room ID number identifying the scene in order to obtain the virtual object information, which adds user operations; moreover, when multiple AR scenes exist (that is, multiple room ID numbers), the user's memory burden grows, which is not smart.
Summary
According to a first aspect of the present disclosure, an augmented reality processing method applied to a first terminal is provided, including: acquiring a current frame image captured by a camera module of the first terminal, extracting image parameters of the current frame image, and sending the image parameters to the cloud so that the cloud determines, using pre-stored mapping results, the information of a virtual object corresponding to the image parameters; receiving the virtual object information sent by the cloud and displaying the virtual object; and editing the virtual object in response to an editing operation on the virtual object.
According to a second aspect of the present disclosure, an augmented reality processing method applied to the cloud is provided, including: acquiring image parameters of a current frame image sent by a first terminal; determining, using pre-stored mapping results, virtual object information corresponding to the image parameters of the current frame image; sending the virtual object information to the first terminal so that the virtual object is displayed on the first terminal; and acquiring and storing the result of the first terminal's editing of the virtual object.
According to a third aspect of the present disclosure, an augmented reality processing device applied to a first terminal is provided, including: a parameter uploading module configured to acquire a current frame image captured by a camera module of the first terminal, extract image parameters of the current frame image, and send the image parameters to the cloud so that the cloud determines, using pre-stored mapping results, the information of a virtual object corresponding to the image parameters; a virtual object acquisition module configured to receive the virtual object information sent by the cloud and display the virtual object; and a virtual object editing module configured to edit the virtual object in response to an editing operation on the virtual object.
According to a fourth aspect of the present disclosure, an augmented reality processing device applied to the cloud is provided, including: a parameter acquisition module configured to acquire image parameters of a current frame image sent by a first terminal; a virtual object determination module configured to determine, using pre-stored mapping results, virtual object information corresponding to the image parameters of the current frame image; a virtual object sending module configured to send the virtual object information to the first terminal so that the virtual object is displayed on the first terminal; and an editing result acquisition module configured to acquire and store the result of the first terminal's editing of the virtual object.
According to a fifth aspect of the present disclosure, an augmented reality processing system is provided, including: a first terminal configured to acquire a current frame image captured by its camera module, extract image parameters of the current frame image, and send the image parameters to the cloud; acquire virtual object information sent by the cloud and display the virtual object; and edit the virtual object in response to an editing operation on the virtual object and feed the editing result back to the cloud; and the cloud, configured to acquire the image parameters; determine, using pre-stored mapping results, virtual object information corresponding to the image parameters of the current frame image and send the virtual object information to the first terminal; and acquire and store the result of the first terminal's editing of the virtual object.
According to a sixth aspect of the present disclosure, a computer-readable storage medium is provided, on which a computer program is stored; when the program is executed by a processor, any one of the augmented reality processing methods above is implemented.
According to a seventh aspect of the present disclosure, an electronic device is provided, including a processor and a memory for storing one or more programs; when the one or more programs are executed by the processor, the processor implements any one of the augmented reality processing methods above.
Brief Description of the Drawings
FIG. 1 shows a schematic architecture diagram of a system implementing multi-user AR according to an embodiment of the present disclosure;
FIG. 2 shows a schematic structural diagram of an electronic device suitable for implementing embodiments of the present disclosure;
FIG. 3 schematically shows a flowchart of an augmented reality processing method according to an exemplary embodiment of the present disclosure;
FIG. 4 shows a schematic interface diagram of a first terminal performing an augmented reality process in response to a user operation;
FIG. 5 shows a schematic diagram of a first terminal displaying an editing sub-interface of a virtual object on the interface in response to a user's selection operation according to an embodiment of the present disclosure;
FIG. 6 shows a schematic diagram of a first terminal displaying an editing sub-interface of a virtual object on the interface in response to a user's selection operation according to another embodiment of the present disclosure;
FIG. 7 shows a schematic diagram of a first terminal moving a virtual object in response to a user operation according to an embodiment of the present disclosure;
FIG. 8 shows a schematic diagram of a first terminal resizing a virtual object in response to a user operation according to an embodiment of the present disclosure;
FIG. 9 shows a schematic diagram of a first terminal deleting a virtual object in response to a user operation according to an embodiment of the present disclosure;
FIG. 10 shows a schematic diagram of a first terminal adding a new virtual object to the scene in response to a user's virtual object adding operation according to an embodiment of the present disclosure;
FIG. 11 shows a schematic diagram of a selection sub-interface presenting the pre-editing and post-editing virtual objects on a third terminal according to an embodiment of the present disclosure;
FIG. 12 schematically shows a flowchart of an augmented reality processing method according to another exemplary embodiment of the present disclosure;
FIG. 13 schematically shows an interaction diagram of an augmented reality processing solution according to an exemplary embodiment of the present disclosure;
FIG. 14 shows a schematic diagram of an effect of applying an augmented reality processing solution according to an exemplary embodiment of the present disclosure;
FIG. 15 schematically shows a block diagram of an augmented reality processing device according to a first exemplary embodiment of the present disclosure;
FIG. 16 schematically shows a block diagram of an augmented reality processing device according to a second exemplary embodiment of the present disclosure;
FIG. 17 schematically shows a block diagram of an augmented reality processing device according to a third exemplary embodiment of the present disclosure;
FIG. 18 schematically shows a block diagram of an augmented reality processing device according to a fourth exemplary embodiment of the present disclosure;
FIG. 19 schematically shows a block diagram of an augmented reality processing device according to a fifth exemplary embodiment of the present disclosure;
FIG. 20 schematically shows a block diagram of an augmented reality processing device according to a sixth exemplary embodiment of the present disclosure;
FIG. 21 schematically shows a block diagram of an augmented reality processing device according to a seventh exemplary embodiment of the present disclosure;
FIG. 22 schematically shows a block diagram of an augmented reality processing device according to an eighth exemplary embodiment of the present disclosure.
具体实施方式
现在将参考附图更全面地描述示例实施方式。然而,示例实施方式能够以多种形式实施,且不应被理解为限于在此阐述的范例;相反,提供这些实施方式使得本公开将更加全面和完整,并将示例实施方式的构思全面地传达给本领域的技术人员。所描述的特征、结构或特性可以以任何合适的方式结合在一个或更多实施方式中。在下面的描述中,提供许多具体细节从而给出对本公开的实施方式的充分理解。然而,本领域技术人员将意识到,可以实践本公开的技术方案而省略所述特定细节中的一个或更多,或者可以采用其它的方法、组元、装置、步骤等。在其它情况下,不详细示出或描述公知技术方案以避免喧宾夺主而使得本公开的各方面变得模糊。
此外,附图仅为本公开的示意性图解,并非一定是按比例绘制。图中相同的附图标记表示相同或类似的部分,因而将省略对它们的重复描述。附图中所示的一些方框图是功能实体,不一定必须与物理或逻辑上独立的实体相对应。可以采用软件形式来实现这些功能实体,或在一个或多个硬件模块或集成电路中实现这些功能实体,或在不同网络和/或处理器装置和/或微控制器装置中实现这些功能实体。
附图中所示的流程图仅是示例性说明,不是必须包括所有的步骤。例如,有的步骤还可以分解,而有的步骤可以合并或部分合并,因此实际执行的顺序有可能根据实际情况改变。另外,下面所有的术语“第一”、“第二”、“第三”等仅是为了区分的目的,不应作为本公开内容的限制。
图1示出了实现本公开实施例的多人AR的系统架构示意图。
如图1所示,实现本公开实施例的多人AR的系统可以包括云端1000、第一终端1100和第二终端1200。在本公开的示例性描述中,通常可以将第二终端1200作为对场景进行建图的终端,在建图的过程中,第二终端1200可以在场景中配置虚拟对象,并可以将构建的地图信息以及虚拟对象信息发送至云端1000进行维护。第一终端1100可以是进行重定位并从云端1000获取虚拟对象信息的终端。
第一终端1100和第二终端1200可以是能够执行AR相关处理的终端,包括但不限于手机、平板、智能可穿戴设备等。云端1000又可被称为云服务器,其可以是单个服务器或由多个服务器组成的服务器集群。第一终端1100或第二终端1200可以通过通信链路的介质与云端1000进行连接,该通信链路的介质可以例如包括有线、无线通信链路或光纤电缆等。
另外,在实现多人AR的场景中,系统还可以包括第三终端、第四终端等与云端1000进行通信连接的移动终端,本公开对系统包括的终端数量不做限制。
第一终端1100可以获取当前帧图像,提取当前帧图像的图像参数,并将图像参数发送至云端1000,云端1000利用预存储的建图结果确定出与当前帧图像的图像参数对应的虚拟对象信息,并将虚拟对象信息发送至第一终端1100。接下来,第一终端1100可以展示虚拟对象,并响应针对该虚拟对象的编辑 操作,对该虚拟对象进行编辑,并将编辑结果反馈给云端1000,由云端1000存储并维护。
其中,云端1000预存储的建图结果以及虚拟对象信息可以由例如第二终端1200通过建图过程而确定出。另外,应当注意的是,在一些实例中,第一终端1100也可以是进行建图的设备,第二终端1200也可以是进行重定位获取虚拟对象的设备。此外,第一终端1100还可以就是第二终端1200,也就是说,第一终端1100进行建图并配置虚拟对象后,在第一终端1100再次处于建图场景时,可以获取其预先配置的虚拟对象。
参考图1,第一终端1100可以包括摄像模组1110、惯性测量单元1120、即时定位与地图构建(Simultaneous Localization And Mapping,SLAM)单元1130、多人AR单元1140和应用程序1150。
摄像模组1110可以用于采集视频帧图像,该视频帧图像通常为RGB图像。在执行下述增强现实处理过程中,摄像模组1110可以用于获取当前帧图像。
惯性测量单元1120可以包括陀螺仪和加速度计,分别测量第一终端1100的角速度和加速度,进而确定出第一终端1100的惯性信息。
即时定位与地图构建单元1130可以用于获取惯性测量单元1120发送的惯性信息以及由摄像模组1110发送的图像,执行建图或重定位过程。
多人AR单元1140可以获取由即时定位与地图构建单元1130发送的当前帧图像,并确定当前帧图像的图像参数。
在本公开实施例中,应用程序1150可以将确定出的图像参数发送至云端1000。另外,在第一终端1100用于配置虚拟对象的实例中,用户还可以利用该应用程序1150配置虚拟对象,并将配置的虚拟对象信息上传至云端1000。
此外,第一终端1100还可以例如包括深度感测模组(未示出),用于采集场景的深度信息,以便进一步利用该深度信息构建出图像参数。具体的,深度感测模组可以是双摄模组、结构光模组或TOF(Time-Of-Flight,飞行时间测距)模组。本公开对此不做特殊限制。
类似地,第二终端1200可以至少包括摄像模组1210、惯性测量单元1220、即时定位与地图构建单元1230、多人AR单元1240和应用程序1250。
图2示出了适于用来实现本公开示例性实施方式的电子设备的示意图。具体的,该电子设备可以指代本公开所述的第一终端、第二终端、第三终端等。需要说明的是,图2示出的电子设备仅是一个示例,不应对本公开实施例的功能和使用范围带来任何限制。
本公开的电子设备至少包括处理器和存储器,存储器用于存储一个或多个程序,当一个或多个程序被处理器执行时,使得处理器可以至少实现本公开示例性实施方式的应用于第一终端的图像处理方法。
具体的,如图2所示,电子设备200可以包括:处理器210、内部存储器221、外部存储器接口222、通用串行总线(Universal Serial Bus,USB)接口230、充电管理模块240、电源管理模块241、电池242、天线1、天线2、移动通信模块250、无线通信模块260、音频模块270、扬声器271、受话器272、麦克风273、耳机接口274、传感器模块280、显示屏290、摄像模组291、指示器292、马达293、按键294以及用户标识模块(Subscriber Identification Module,SIM)卡接口295等。其中传感器模块280可以包括深度传感器2801、压力传感器2802、陀螺仪传感器2803、气压传感器2804、磁传感器2805、加速度传感器2806、距离传感器2807、接近光传感器2808、指纹传感器2809、温度传感器2810、触摸传感器2811、环境光传感器2812及骨传导传感器2813等。
可以理解的是,本申请实施例示意的结构并不构成对电子设备200的具体限定。在本申请另一些实施例中,电子设备200可以包括比图示更多或更少的部件,或者组合某些部件,或者拆分某些部件,或者不同的部件布置。图示的部件可以以硬件、软件或软件和硬件的组合实现。
处理器210可以包括一个或多个处理单元,例如:处理器210可以包括应用处理器(Application Processor,AP)、调制解调处理器、图形处理器(Graphics Processing Unit,GPU)、图像信号处理器(Image Signal Processor,ISP)、控制器、视频编解码器、数字信号处理器(Digital Signal Processor,DSP)、基带处理器和/或神经网络处理器(Neural-etwork Processing Unit,NPU)等。其中,不同的处理单元可以是独立的器件,也可以集成在一个或多个处理器中。另外,处理器210中还可以设置存储器,用于存储指令和数据。
USB接口230是符合USB标准规范的接口,具体可以是MiniUSB接口,MicroUSB接口,USBTypeC接口等。USB接口230可以用于连接充电器为电子设备200充电,也可以用于电子设备200与外围设备之间传输数据。也可以用于连接耳机,通过耳机播放音频。该接口还可以用于连接其他电子设备,例如AR设备等。
充电管理模块240用于从充电器接收充电输入。其中,充电器可以是无线充电器,也可以是有线充电器。电源管理模块241用于连接电池242、充电管理模块240与处理器210。电源管理模块241接收 电池242和/或充电管理模块240的输入,为处理器210、内部存储器221、显示屏290、摄像模组291和无线通信模块260等供电。
电子设备200的无线通信功能可以通过天线1、天线2、移动通信模块250、无线通信模块260、调制解调处理器以及基带处理器等实现。
移动通信模块250可以提供应用在电子设备200上的包括2G/3G/4G/5G等无线通信的解决方案。
无线通信模块260可以提供应用在电子设备200上的包括无线局域网(Wireless Local Area Networks,WLAN)(如无线保真(Wireless Fidelity,Wi-Fi)网络)、蓝牙(Bluetooth,BT)、全球导航卫星系统(Global Navigation Satellite System,GNSS)、调频(Frequency Modulation,FM)、近距离无线通信技术(Near Field Communication,NFC)、红外技术(Infrared,IR)等无线通信的解决方案。
电子设备200通过GPU、显示屏290及应用处理器等实现显示功能。GPU为图像处理的微处理器,连接显示屏290和应用处理器。GPU用于执行数学和几何计算,用于图形渲染。处理器210可包括一个或多个GPU,其执行程序指令以生成或改变显示信息。
电子设备200可以通过ISP、摄像模组291、视频编解码器、GPU、显示屏290及应用处理器等实现拍摄功能。在一些实施例中,电子设备200可以包括1个或N个摄像模组291,N为大于1的正整数,若电子设备200包括N个摄像头,N个摄像头中有一个是主摄像头。
内部存储器221可以用于存储计算机可执行程序代码,所述可执行程序代码包括指令。内部存储器221可以包括存储程序区和存储数据区。外部存储器接口222可以用于连接外部存储卡,例如Micro SD卡,实现扩展电子设备200的存储能力。
电子设备200可以通过音频模块270、扬声器271、受话器272、麦克风273、耳机接口274及应用处理器等实现音频功能。例如音乐播放、录音等。
音频模块270用于将数字音频信息转换成模拟音频信号输出,也用于将模拟音频输入转换为数字音频信号。音频模块270还可以用于对音频信号编码和解码。在一些实施例中,音频模块270可以设置于处理器210中,或将音频模块270的部分功能模块设置于处理器210中。
扬声器271,也称“喇叭”,用于将音频电信号转换为声音信号。电子设备200可以通过扬声器271收听音乐,或收听免提通话。受话器272,也称“听筒”,用于将音频电信号转换成声音信号。当电子设备200接听电话或语音信息时,可以通过将受话器272靠近人耳接听语音。麦克风273,也称“话筒”,“传声器”,用于将声音信号转换为电信号。当拨打电话或发送语音信息时,用户可以通过人嘴靠近麦克风273发声,将声音信号输入到麦克风273。电子设备200可以设置至少一个麦克风273。耳机接口274用于连接有线耳机。
针对电子设备200包括的传感器,深度传感器2801用于获取景物的深度信息。压力传感器2802用于感受压力信号,可以将压力信号转换成电信号。陀螺仪传感器2803可以用于确定电子设备200的运动姿态。气压传感器2804用于测量气压。磁传感器2805包括霍尔传感器。电子设备200可以利用磁传感器2805检测翻盖皮套的开合。加速度传感器2806可检测电子设备200在各个方向上(一般为三轴)加速度的大小。距离传感器2807用于测量距离。接近光传感器2808可以包括例如发光二极管(LED)和光检测器,例如光电二极管。指纹传感器2809用于采集指纹。温度传感器2810用于检测温度。触摸传感器2811可以将检测到的触摸操作传递给应用处理器,以确定触摸事件类型。可以通过显示屏290提供与触摸操作相关的视觉输出。环境光传感器2812用于感知环境光亮度。骨传导传感器2813可以获取振动信号。
按键294包括开机键,音量键等。按键294可以是机械按键。也可以是触摸式按键。马达293可以产生振动提示。马达293可以用于来电振动提示,也可以用于触摸振动反馈。指示器292可以是指示灯,可以用于指示充电状态,电量变化,也可以用于指示消息,未接来电,通知等。SIM卡接口295用于连接SIM卡。电子设备200通过SIM卡和网络交互,实现通话以及数据通信等功能。
本申请还提供了一种计算机可读存储介质,该计算机可读存储介质可以是上述实施例中描述的电子设备中所包含的;也可以是单独存在,而未装配入该电子设备中。
计算机可读存储介质例如可以是——但不限于——电、磁、光、电磁、红外线、或半导体的系统、装置或器件,或者任意以上的组合。计算机可读存储介质的更具体的例子可以包括但不限于:具有一个或多个导线的电连接、便携式计算机磁盘、硬盘、随机访问存储器(RAM)、只读存储器(ROM)、可擦式可编程只读存储器(EPROM或闪存)、光纤、便携式紧凑磁盘只读存储器(CD-ROM)、光存储器件、磁存储器件、或者上述的任意合适的组合。在本公开中,计算机可读存储介质可以是任何包含或存储程序的有形介质,该程序可以被指令执行系统、装置或者器件使用或者与其结合使用。
计算机可读存储介质可以发送、传播或者传输用于由指令执行系统、装置或者器件使用或者与其结合使用的程序。计算机可读存储介质上包含的程序代码可以用任何适当的介质传输,包括但不限于:无 线、电线、光缆、RF等等,或者上述的任意合适的组合。
计算机可读存储介质承载有一个或者多个程序,当上述一个或者多个程序被一个该电子设备执行时,使得该电子设备实现如下述实施例中所述的方法。
附图中的流程图和框图,图示了按照本公开各种实施例的系统、方法和计算机程序产品的可能实现的体系架构、功能和操作。在这点上,流程图或框图中的每个方框可以代表一个模块、程序段、或代码的一部分,上述模块、程序段、或代码的一部分包含一个或多个用于实现规定的逻辑功能的可执行指令。也应当注意,在有些作为替换的实现中,方框中所标注的功能也可以以不同于附图中所标注的顺序发生。例如,两个接连地表示的方框实际上可以基本并行地执行,它们有时也可以按相反的顺序执行,这依所涉及的功能而定。也要注意的是,框图或流程图中的每个方框、以及框图或流程图中的方框的组合,可以用执行规定的功能或操作的专用的基于硬件的系统来实现,或者可以用专用硬件与计算机指令的组合来实现。
描述于本公开实施例中所涉及到的单元可以通过软件的方式实现,也可以通过硬件的方式来实现,所描述的单元也可以设置在处理器中。其中,这些单元的名称在某种情况下并不构成对该单元本身的限定。
图3示意性示出了本公开的示例性实施方式的应用于第一终端的增强现实处理方法的流程图。参考图3,该增强现实处理方法可以包括以下步骤:
S32.获取第一终端的摄像模组采集的当前帧图像,提取当前帧图像的图像参数,将图像参数发送至云端,以便云端利用预存储的建图结果确定出与图像参数对应的虚拟对象的信息。
根据本公开的一些实施例,第一终端在利用其摄像模组采集当前帧图像后,可以提取当前帧图像的二维特征点信息作为与当前帧图像对应的图像参数,发送至云端。
为了更准确地表达当前场景所包含的信息,在本公开的另一些实施例,当前帧图像的图像参数可以包含当前帧图像的二维特征点信息和三维特征点信息。
可以基于特征提取算法和特征描述子的组合来提取当前帧图像的二维特征点信息。本公开示例性实施方式采用的特征提取算法可以包括但不限于FAST特征点检测算法、DOG特征点检测算法、Harris特征点检测算法、SIFT特征点检测算法、SURF特征点检测算法等。特征描述子可以包括但不限于BRIEF特征点描述子、BRISK特征点描述子、FREAK特征点描述子等。
根据本公开的一个实施例,特征提取算法和特征描述子的组合可以是FAST特征点检测算法和BRIEF特征点描述子。根据本公开的另一些实施例,特征提取算法和特征描述子的组合可以是DOG特征点检测算法和FREAK特征点描述子。
应当理解的是,还可以针对不同纹理场景采用不同的组合形式,例如,针对强纹理场景,可以采用FAST特征点检测算法和BRIEF特征点描述子来进行特征提取;针对弱纹理场景,可以采用DOG特征点检测算法和FREAK特征点描述子来进行特征提取。
在确定出当前帧图像的二维特征点信息的情况下,可以结合二维特征点信息对应的深度信息,确定当前帧图像的三维特征点信息。
具体的,在获取当前帧图像时,可以通过深度感测模组采集与当前帧图像对应的深度信息。其中,深度感测模组可以是双摄模组(例如,彩色摄像头与长焦摄像头)、结构光模组、TOF模组中的任意一个。
在得到当前帧图像以及对应的深度信息后,可以将当前帧图像与深度信息进行配准,确定当前帧图像上各像素点的深度信息。
针对配准的过程,需要预先标定摄像头模组与深度感测模组的内参和外参。
具体的,可以构建一个三维向量p_ir=(x,y,z),其中,x,y表示一像素点的像素坐标,z表示该像素点的深度值。利用深度感测模组的内参矩阵可以得到该像素点在深度感测模组坐标系下的坐标P_ir。然后,P_ir可以与一个旋转矩阵R相乘,再加上一个平移向量T,即可将P_ir转换到RGB摄像头的坐标系下,得到P_rgb。随后,P_rgb可以与摄像头模组的内参矩阵H_rgb相乘,得到p_rgb,p_rgb也是一个三维向量,记为(x0,y0,z0),其中,x0和y0即为该像素点在RGB图像中的像素坐标,提取该像素点的像素值,与对应的深度信息进行匹配。由此,完成了一个像素点的二维图像信息与深度信息的对齐。在这种情况下,针对每一个像素点均执行上述过程,以完成配准过程。
在确定出当前帧图像上各像素点的深度信息后,可以从中确定出与二维特征点信息对应的深度信息,并将二维特征点信息与二维特征点信息对应的深度信息结合,确定出当前帧图像的三维特征点信息。
另外,在获取由深度感测模组的深度信息后,还可以对深度信息进行去噪,以去除深度信息中明显错误的深度值。例如,可以采用深度神经网络去除TOF图像中的噪点,本示例性实施方式中对此不做特殊限定。
图4示出了响应用户操作执行获取当前帧图像的示意图。参考图4,在用户点击界面上“AR”应用程序的图标后,进入该应用程序,界面出现执行获取视频帧图像的确认选项,即确认“是否开启摄像头来执行AR过程”,在用户点击“是”后,第一终端控制开启摄像模组,采集当前帧图像并执行上述提取图像参数的过程。
鉴于执行AR过程会消耗终端的资源并考虑到一些场景是否合适进行拍摄的情况,提供上述确认的过程,以供用户选择。
在确定出当前帧图像的图像参数后,第一终端可以将该图像参数发送至云端,以便云端利用预存储的建图结果确定出与图像参数对应的虚拟对象的信息。
下面对云端确定对应虚拟对象信息的过程进行说明。
根据本公开的一些实施例,第一终端还可以将当前所处场景的位置信息发送给云端。具体的,第一终端可以利用下述任一种系统来确定出当前所处场景的位置信息:全球卫星定位系统(Global Positioning System,GPS),全球导航卫星系统(Global Navigation Satellite System,GLONASS),北斗卫星导航系统(Beidou avigation satellite system,BDS),准天顶卫星系统(Quasi-Zenith Satellite System,QZSS)和/或星基增强系统(Satellite Based Augmentation Systems,SBAS)。
云端在获取到位置信息后,可以确定出该位置信息对应的一个或多个预先构建的地图。容易理解的是,预先构建的地图信息可以与其实际地理位置信息对应。
如果仅存在一个对应的地图,则确定该地图对应的关键帧图像的图像参数,并与第一终端发送的当前帧图像的图像参数进行匹配,以确定出对应的虚拟对象信息。
如果存在多个对应的地图,则将这些地图的关键帧图像作为搜索集合,查找到与第一终端的当前帧图像匹配的关键帧图像,进而确定出对应的虚拟对象信息。
如果不存在对应的地图,也就是说,云端预存储的建图结果中不存在与当前帧图像的图像参数匹配的参考图像,则云端可以向第一终端发送建图提示,以提示第一终端可以执行对当前所处场景的地图构建过程,在这种情况下,用户可以根据提示进行建图操作,并将建图结果反馈给云端。
也就是说,在这些实施例中,云端可以根据第一终端的位置信息确定出预存储的建图结果的搜索范围,利用该搜索范围内的结果确定出对应的虚拟对象信息。由此,避免了预存储的建图结果较多,搜索耗时较长的问题。
根据本公开的另一些实施例,在云端计算资源充足或预存储的建图结果数量不多的情况下,第一终端将当前帧图像的图像参数发送给云端后,云端可以直接利用预存储的建图结果进行搜索,以确定出与当前帧图像的图像参数对应的关键帧图像,进而确定出与该关键帧图像对应的虚拟对象,以得到与第一终端的当前帧图像对应的虚拟对象信息。
针对云端确定对应的虚拟对象信息的过程,将确定出的与当前帧图像匹配的图像记为参考图像,并将拍摄参考图像的终端记为第二终端。根据第一终端与第二终端的相对位姿关系以及第二终端在建图时配置的虚拟对象信息,确定出放置于第一终端的虚拟对象信息。
就确定第一终端与第二终端的相对位姿关系而言,可以基于当前帧图像的图像参数与参考图像的图像参数,确定当前帧图像相对于第二终端的位姿,并利用采集当前帧图像时第一终端的姿态信息,确定出第一终端与第二终端的相对位姿关系。
下面对确定当前帧图像相对于第二终端的位姿的过程进行说明。
根据本公开的一个实施例,可以通过特征匹配或描述子匹配的方式,确定当前帧图像的二维特征点信息与参考图像的二维特征点信息的关系,如果确定出当前帧图像的二维特征点信息与参考图像的二维特征点信息匹配,则可以采用迭代最近点(Iterative Closest Point,ICP)的方式确定当前帧图像的三维特征点信息与参考图像的三维特征点信息之间的相对位姿关系。
具体的,当前帧图像的三维特征点信息即是当前帧图像对应的点云信息,参考图像的三维特征点信息即是参考图像的点云信息。可以将此两个点云信息作为输入,通过输入指定的位姿作为初始值,利用迭代最近点的方式得到两个点云对齐后的最优相对位姿,即确定出当前帧图像的三维特征点信息与参考图像的三维特征点信息之间的相对位姿关系。由此,基于第二终端在获取参考图像时的姿态信息,可以确定出当前帧图像相对于第二终端的位姿。
应当理解的是,在进行点云匹配之前,先确定二维信息之间的关系,由于二维信息关系的确定,通常采用的是特征匹配或描述子匹配的方式,过程简单。由此,可以加速匹配的整个过程,提高精度的同时,也可以实现提前排错的效果。
另外,在上述二维特征点信息的匹配过程中,由于特征及描述子的问题,可能存在误匹配的问题。由此,本公开示例性实施方式还可以包括去除误匹配点的方案。
可以采用RANSAC(Random Sample Consensus,随机抽样一致性)方式剔除误匹配特征点信息。 具体的,在当前帧图像的二维特征点与参考图像的二维特征点之间的匹配对中随机选取一定数量的匹配对(例如,7对、8对等),通过选取的匹配对计算当前帧图像与参考图像之间的基本矩阵或本质矩阵,基于极线约束的方式,如果一个二维特征点离对应的极线距离较远,例如,大于一阈值,则可以认为该二维特征点为误匹配点。通过迭代一定次数的随机取样过程,选取内点个数最多的一次随机取样结果作为最终的匹配结果,在此基础上,可以从当前帧图像的三维特征点信息中剔除误匹配特征点信息。
由此,可以利用剔除误匹配特征点信息的三维特征点信息确定出当前帧图像相对于第二终端的位姿。
根据本公开的另一个实施例,首先,如果当前帧图像的二维特征点信息与参考图像的二维特征点信息匹配,则将当前帧图像的二维特征点信息与参考图像的三维特征点信息关联,以得到点对信息。接下来,可以将该点对信息作为输入,求解透视n点(Perspective-n-Point,PnP)问题,根据当前帧图像的三维特征点信息并结合求解结果确定当前帧图像相对于第二终端的位姿。
其中,PnP是机器视觉领域的经典方法,可以根据物体上的n个特征点来确定摄像头与物体间的相对位姿。具体可以根据物体上的n个特征点来确定摄像头与物体间的旋转矩阵和平移向量。另外,可以例如将n确定为大于等于4。
根据本公开的又一个实施例,可以将上一实施例结合PnP的求解结果而得到的三维特征点信息与参考图像的三维特征点信息的相对位姿关系作为迭代初始位姿输入,利用迭代最近点方式确定当前帧图像的三维特征点信息以及参考图像的三维特征点信息之间的相对位姿关系,以确定出当前帧图像相对于第二终端的位姿。容易看出,在本实施例是将PnP与ICP结合,提高位姿关系确定的准确性。
在确定出与当前帧图像的图像参数对应的虚拟对象信息后,云端可以将虚拟对象信息发送至第一终端。
根据本公开的一些实施例,在云端发送虚拟对象信息之前,云端可以确定该虚拟对象的获取权限,通常,获取权限可以由配置该虚拟对象的第二终端设定,例如,该获取权限包括但不限于:只有好友才能获取、只有第二终端自身才能获取、对所有设备开放等等。
如果第一终端满足该获取权限,则云端执行将虚拟对象信息发送至第一终端的过程;如果第一终端不满足该获取权限,则云端可以向第一终端发送权限错误提示,以表征第一终端不具备获取虚拟对象权限。或者,如果第一终端不满足该获取权限,则不向第一终端反馈任何结果,针对第一终端而言,获知的信息是,当前场景未存在预配置的虚拟对象。
具体的,云端可以获取第一终端的标识,如果第一终端的标识在第二终端配置的标识白名单内,则第一终端满足获取权限;如果第一终端的标识不在该标识白名单内,则第一终端不满足获取权限。
S34.接收云端发送的虚拟对象的信息并展示该虚拟对象。
第一终端在获取到虚拟对象的信息后,解析出虚拟对象自身信息以及对应位置信息,以展示出虚拟对象。由此,用户可以通过第一终端的屏幕看到该虚拟对象。
容易理解的是,虚拟对象可以由建图的人员通过建图终端编辑出,本公开对虚拟对象的种类、色彩、尺寸均不做限制。
S36.响应针对虚拟对象的编辑操作,对虚拟对象进行编辑。
在本公开的示例性实施方式中,针对虚拟对象的编辑操作可以包括以下至少一种:删除虚拟对象、移动虚拟对象、旋转虚拟对象、修改虚拟对象的属性。其中,虚拟对象的属性可以包括但不限于尺寸、色彩、形变方向及程度等。另外,还可以对虚拟对象进行切割操作,例如,将虚拟对象在结构上一分为二或切分为多个部分,显示时仅保留一个部分。这些均属于对虚拟对象的编辑操作。
根据本公开的一些实施例,在第一终端的界面上,可以响应针对虚拟对象的选定操作,即选定虚拟对象,展示虚拟对象的编辑子界面,该编辑子界面可以是独立于终端界面的子界面。接下来,响应在编辑子界面上的编辑操作,对虚拟对象进行编辑。
根据本公开的另一些实施例,区别于上述编辑子界面的方案或者与上述编辑子界面相结合,在第一终端的界面上,可以直接对虚拟对象进行点击、拉伸、移动等操作,来实现对虚拟对象的编辑。
下面将参考图5至图9对一些编辑的方式进行示例性说明。
参考图5,第一终端51的摄像模组朝向真实的桌子52拍摄,在执行上述确定虚拟对象的过程后,在第一终端51的屏幕上展示有虚拟对象53,该虚拟对象53例如为虚拟皮球。
在用户点击屏幕上的虚拟对象53时,也就是说,用户进行了针对虚拟对象的选定操作,在屏幕上的界面中可以出现编辑子界面500,用户可以点击编辑子界面500中的移动、修改属性、删除等按钮,以进一步实现对应的编辑功能。
此外,用户还可以通过长按屏幕、双击屏幕等操作,来呈现出编辑子界面500,本示例性实施方式中对此不做限定。
在界面中存在至少一个虚拟对象的情况下,还可以通过对象列表选定虚拟对象,由此,避免了虚拟 对象之间或虚拟对象与真实对象之间相互遮挡,不利于选定的情况。
参考图6,在第一终端51的屏幕上展示有虚拟对象53和虚拟对象61。用户可以通过点击隐藏按钮601来展开对象列表600,该对象列表600中可以包含虚拟对象的缩小图或标识(例如,文字等),以便用户通过点击该缩小图或标识来触发显示编辑子界面500。初始时,可以隐藏对象列表600,以避免遮挡界面中的其他对象。
图7示出了移动虚拟对象的示意图,在一个实例中,可以通过拖拽虚拟对象53的方式,将虚拟对象53从位置A移动到位置B。在另一个实例中,在用户点击编辑子界面500中的移动按钮后,点击位置B的区域,即可实现将虚拟对象53从位置A移动到位置B的效果。
图8示出了编辑虚拟对象尺寸的示意图,在这个实施例中,响应用户针对编辑子界面500中修改属性按钮的点击操作,进一步显示出尺寸修改子界面801,在用户点击增加尺寸对应的按钮后,可以放大虚拟对象53,进而得到虚拟对象81。
另外,用户还可以直接通过针对虚拟对象的拉伸操作,来实现虚拟对象的尺寸调整。
图9示出了删除虚拟对象的示意图,具体的,响应用户针对编辑子界面500中删除按钮的点击操作,进一步显示出确认删除的子界面901,在用户点击确认删除的按钮后,可以在当前场景中删除虚拟对象53。
应当理解的是,上面附图对编辑操作的说明仅是示例性的,不应作为本公开内容的限制。
第一终端可以将编辑的结果发送给云端,以便云端存储。
除上述针对已有虚拟对象的编辑操作外,本公开还提供了一种在场景中添加新的虚拟对象的方案。具体的,第一终端可以响应虚拟对象添加操作,在第一终端所处场景中添加新的虚拟对象,并将该新的虚拟对象的信息发送至云端,云端将该新的虚拟对象信息与当前第一终端所处场景的地图信息匹配,也就是说,新的虚拟对象信息可以与当前地图ID关联。
参考图10,应用程序配置有添加对象按钮,在用户点击添加对象按钮后,第一终端51的界面上可以出现添加对象子界面100。另外,用户也可以通过预设的呈现规则(例如,双击屏幕、长按屏幕等),来呈现出添加对象子界面100。
基于添加对象子界面100,用户可以从已有对象中选择一个或多个虚拟对象添加到场景中,也可以自行编辑一个或多个新的虚拟对象添加到场景中。在这种情况下,已有对象可以包括开发人员预先编辑好的虚拟对象,并在下载AR应用程序时一并下载到第一终端,或者,已有对象可以是网络中开发人员共享的虚拟对象信息。另外,已有对象还包括是用户历史上编辑好的虚拟对象,本示例性实施方式中对此不做限定。
如图10所示,在响应用户选择已有对象或自行编辑对象的情况下,可以在场景中添加新的虚拟对象101,例如,虚拟杯子。
根据本公开的一些实施例,云端在获取到编辑后的虚拟对象信息后,可以利用编辑后的虚拟对象信息替换编辑前存储的虚拟对象信息。也就是说,云端仅存储有最新的编辑结果,删除历史上可能存在的编辑结果。
根据本公开的另一些实施例,云端可以同时存储编辑后的虚拟对象信息和编辑前的虚拟对象信息。
参考图11,云端在需要将虚拟对象信息发送至第三终端111时,可以将编辑后的虚拟对象信息以及编辑前的虚拟对象信息一并发送至第三终端111,在第三终端111的界面上可以显示出选择对象子界面110。接下来,可以响应用户针对不同编辑结果的选择操作,确定出实际在界面上展示的虚拟对象。
图12示意性示出了本公开的示例性实施方式的应用于云端的增强现实处理方法的流程图。参考图12,该增强现实处理方法可以包括以下步骤:
S122.获取第一终端发送的当前帧图像的图像参数;
S124.利用预存储的建图结果确定出与当前帧图像的图像参数对应的虚拟对象信息;
S126.将虚拟对象信息发送至第一终端,以便在第一终端上展示虚拟对象;
S128.获取第一终端对虚拟对象进行编辑的结果并存储。
步骤S122至步骤S128的过程,在上述步骤S32至步骤S36中已进行说明,在此不再赘述。
下面将参考图13对本公开示例性实施方式的增强现实处理方案的交互过程进行说明。
在步骤S1302中,第二终端对场景进行建图,得到当前场景的地图信息;在步骤S1304中,第二终端响应用户放置锚点信息的操作,在场景中配置虚拟对象,可以理解的是,同一场景下,虚拟对象与地图信息相关联;在步骤S1306中,第二终端可以将构建地图信息及虚拟对象信息上传至云端。
在步骤S1308中,第一终端获取其摄像模组采集的当前帧图像,并提取图像参数;在步骤S1310中,第一终端将提取到的图像参数上传至云端。
在步骤S1312中,云端利用包括第二终端上传的地图信息在内的预存储的建图结果,对第一终端 上传的图像参数进行特征搜索及匹配,确定出与第一终端上传的图像参数对应的虚拟对象;在步骤S1314中,云端将确定出的虚拟对象发送至第一终端。
在步骤S1316中,第一终端展示虚拟对象,并响应用户操作对虚拟对象进行再编辑;在步骤S1318中,第一终端将再编辑的结果反馈给云端。
在步骤S1320中,云端存储再编辑的结果,并与对应地图信息匹配。
图14示出了应用本公开的增强现实处理方案的一种效果示意图。具体的,用户到达一个场景时,可以利用手机,开启多人AR的应用程序,即可通过界面获取当前场景的信息,如图14所示,展示出“XX故居”的介绍牌140。
本公开方案的应用场景广泛,又例如,可以对建筑物进行虚拟的说明、对餐馆评价进行虚拟展示、为室内商场放置虚拟导航图标以方便后续用户寻找,等等。本公开对此不做限制。
综上所述,应用本公开示例性实施方式的增强现实处理方案,一方面,在获取虚拟对象信息时,无需用户输入房间ID号,而是云端通过搜索与当前帧图像匹配的建图结果而确定出虚拟对象信息,在确定出当前帧图像后,无需用户进行操作,智能匹配出对应的虚拟对象信息,便利性得到了提高;另一方面,相比于需要输入房间ID号获取虚拟对象的方案,本公开方案中,用户无需记忆不同场景的房间ID号;再一方面,本公开方案可以对预先配置的虚拟对象进行再编辑,增强了多人AR体验的趣味性;又一方面,可以利用GPS等手段定位到对应的地图信息,以便快速对图像进行搜索,确定出虚拟对象。
应当注意,尽管在附图中以特定顺序描述了本公开中方法的各个步骤,但是,这并非要求或者暗示必须按照该特定顺序来执行这些步骤,或是必须执行全部所示的步骤才能实现期望的结果。附加的或备选的,可以省略某些步骤,将多个步骤合并为一个步骤执行,以及/或者将一个步骤分解为多个步骤执行等。
进一步的,本示例实施方式中还提供了一种应用于第一终端的增强现实处理装置。
图15示意性示出了本公开的示例性实施方式的应用于第一终端的增强现实处理装置的方框图。参考图15,根据本公开的示例性实施方式的应用于第一终端的增强现实处理装置15可以包括参数上传模块151、虚拟对象获取模块153和虚拟对象编辑模块155。
具体的,参数上传模块151可以用于获取第一终端的摄像模组采集的当前帧图像,提取当前帧图像的图像参数,将图像参数发送至云端,以便云端利用预存储的建图结果确定出与图像参数对应的虚拟对象的信息;虚拟对象获取模块153可以用于接收云端发送的虚拟对象信息并展示虚拟对象;虚拟对象编辑模块155可以用于响应针对虚拟对象的编辑操作,对虚拟对象进行编辑。
基于本公开示例性实施方式的应用于第一终端的增强现实处理装置,一方面,在获取虚拟对象信息时,无需用户输入房间ID号,而是云端通过搜索与当前帧图像匹配的建图结果而确定出虚拟对象信息,在确定出当前帧图像后,无需用户进行操作,智能匹配出对应的虚拟对象信息,便利性得到了提高;另一方面,相比于需要输入房间ID号获取虚拟对象的方案,本公开方案中,用户无需记忆不同场景的房间ID号;再一方面,本公开方案可以对预先配置的虚拟对象进行再编辑,增强了多人AR体验的趣味性。
根据本公开的示例性实施例,虚拟对象编辑模块155还可以将编辑的结果反馈给云端。
根据本公开的示例性实施例,参考图16,相比于增强现实处理装置15,增强现实处理装置16还可以包括位置上传模块161。
具体的,位置上传模块161可以被配置为执行:获取所述第一终端所处场景的位置信息;将所述位置信息发送至所述云端,以便所述云端确定出建图结果的搜索范围并利用所述搜索范围内的建图结果确定出与所述图像参数对应的虚拟对象信息。
根据本公开的示例性实施例,虚拟对象编辑模块155可以被配置为执行:在所述第一终端的界面上,响应针对所述虚拟对象的选定操作,展示所述虚拟对象的编辑子界面;响应在所述编辑子界面上的编辑操作,对所述虚拟对象进行编辑。
根据本公开的示例性实施例,虚拟对象进行编辑的类型包括:删除所述虚拟对象、移动所述虚拟对象、旋转所述虚拟对象、修改所述虚拟对象的属性中至少一种。
根据本公开的示例性实施例,参考图17,相比于增强现实处理装置15,增强现实处理装置17还可以包括虚拟对象添加模块171。
具体的,虚拟对象添加模块171可以被配置为执行:响应虚拟对象添加操作,在所述第一终端所处场景中添加新的虚拟对象;以及将所述新的虚拟对象的信息发送至所述云端。
根据本公开的示例性实施例,图像参数包括所述当前帧图像的二维特征点信息和三维特征点信息,在这种情况下,参数上传模块151提取当前帧图像的图像参数的过程可以被配置为执行:对所述当前帧图像进行二维特征点提取,确定所述当前帧图像的二维特征点信息;获取所述二维特征点信息对应的深 度信息,根据所述二维特征点信息以及所述二维特征点信息对应的深度信息确定所述当前帧图像的三维特征点信息。
根据本公开的示例性实施例,参数上传模块151还可以被配置为执行:获取由所述第一终端的深度感测模组采集的与所述当前帧图像对应的深度信息;将所述当前帧图像与所述当前帧图像对应的深度信息进行配准,确定所述当前帧图像上各像素点的深度信息;从所述当前帧图像上各像素点的深度信息中确定出与所述二维特征点信息对应的深度信息;利用所述二维特征点信息以及与所述二维特征点信息对应的深度信息,确定所述当前帧图像的三维特征点信息。
进一步的,本示例实施方式中还提供了一种应用于云端的增强现实处理装置。
图18示意性示出了本公开的示例性实施方式的应用于云端的增强现实处理装置的方框图。参考图18,根据本公开的示例性实施方式的应用于云端的增强现实处理装置18可以包括参数获取模块181、虚拟对象确定模块183、虚拟对象发送模块185和编辑结果获取模块187。
具体的,参数获取模块181可以用于获取第一终端发送的当前帧图像的图像参数;虚拟对象确定模块183可以用于利用预存储的建图结果确定出与当前帧图像的图像参数对应的虚拟对象信息;虚拟对象发送模块185可以用于将虚拟对象信息发送至第一终端,以便在第一终端上展示虚拟对象;编辑结果获取模块187可以用于获取第一终端对虚拟对象进行编辑的结果并存储。
基于本公开示例性实施方式的应用于云端的增强现实处理装置,一方面,在获取虚拟对象信息时,无需用户输入房间ID号,而是云端通过搜索与当前帧图像匹配的建图结果而确定出虚拟对象信息,在确定出当前帧图像后,无需用户进行操作,智能匹配出对应的虚拟对象信息,便利性得到了提高;另一方面,相比于需要输入房间ID号获取虚拟对象的方案,本公开方案中,用户无需记忆不同场景的房间ID号;再一方面,本公开方案可以对预先配置的虚拟对象进行再编辑,增强了多人AR体验的趣味性。
根据本公开的示例性实施例,虚拟对象确定模块183可以被配置为执行:获取所述第一终端所处场景的位置信息;确定与所述位置信息对应的建图结果的搜索范围;利用所述搜索范围内的建图结果确定出与当前帧图像的图像参数对应的虚拟对象信息。
根据本公开的示例性实施例,虚拟对象确定模块183可以被配置为执行:从所述预存储的建图结果中,筛选出与所述当前帧图像的图像参数匹配的参考图像,并确定拍摄所述参考图像的第二终端;利用所述当前帧图像的图像参数与所述参考图像的图像参数,确定所述当前帧图像相对于所述第二终端的位姿;根据所述当前帧图像相对于所述第二终端的位姿以及采集所述当前帧图像时所述第一终端的姿态信息,确定所述第一终端与所述第二终端的相对位姿关系;利用所述第一终端与所述第二终端的相对位姿关系,并结合所述第二终端在建图时配置的虚拟对象信息,确定与所述当前帧图像的图像参数对应的虚拟对象信息。
根据本公开的示例性实施例,虚拟对象确定模块183可以被配置为执行:如果所述当前帧图像的二维特征点信息与所述参考图像的二维特征点信息匹配,则利用迭代最近点方式确定所述当前帧图像的三维特征点信息与所述参考图像的三维特征点信息之间的相对位姿关系,以得到所述当前帧图像相对于所述第二终端的位姿。
根据本公开的示例性实施例,虚拟对象确定模块183可以被配置为执行:确定所述当前帧图像的二维特征点信息与所述参考图像的二维特征点信息中的误匹配特征点信息;从所述当前帧图像的三维特征点信息中剔除所述误匹配特征点信息,以便确定所述当前帧图像的剔除所述误匹配特征点信息后的三维特征点信息与所述参考图像的剔除所述误匹配特征点信息后的三维特征点信息之间的相对位姿关系。
根据本公开的示例性实施例,虚拟对象确定模块183可以被配置为执行:如果所述当前帧图像的二维特征点信息与所述参考图像的二维特征点信息匹配,则将所述当前帧图像的二维特征点信息与所述参考图像的三维特征点信息关联,得到点对信息;利用所述点对信息求解透视n点问题,根据所述当前帧图像的三维特征点信息并结合求解结果确定所述当前帧图像相对于所述第二终端的位姿。
根据本公开的示例性实施例,虚拟对象确定模块183可以被配置为执行:根据求解结果确定所述当前帧图像的三维特征点信息与所述参考图像的三维特征点信息的相对位姿关系;将根据所述求解结果确定出的所述当前帧图像的三维特征点信息与所述参考图像的三维特征点信息的相对位姿关系作为初始位姿输入,采用迭代最近点方式确定所述当前帧图像的三维特征点信息以及所述参考图像的三维特征点信息之间的相对位姿关系,以确定出所述当前帧图像相对于所述第二终端的位姿。
根据本公开的示例性实施例,参考图19,相比于增强现实处理装置18,增强现实处理装置19还可以包括建图提示模块191。
具体的,建图提示模块191可以被配置为执行:如果在所述预存储的建图结果中不存在与所述当前帧图像的图像参数匹配的参考图像,则向所述第一终端发送建图提示,以提示所述第一终端执行对所处场景的地图构建过程。
根据本公开的示例性实施例,虚拟对象发送模块185可以被配置为执行:确定所述虚拟对象信息的获取权限;如果所述第一终端满足所述获取权限,则执行将虚拟对象信息发送至所述第一终端的过程;如果所述第一终端不满足所述获取权限,则向所述第一终端发送权限错误提示。
根据本公开的示例性实施例,虚拟对象发送模块185可以被配置为执行:获取所述第一终端的标识;如果所述第一终端的标识在所述标识白名单内,则所述第一终端满足所述获取权限;如果所述第一终端的标识不在所述标识白名单内,则所述第一终端不满足所述获取权限。
根据本公开的示例性实施例,参考图20,相比于增强现实处理装置18,增强现实处理装置20还可以包括第一编辑结果处理模块201。
具体的,第一编辑结果处理模块201可以被配置为执行:在获取所述第一终端对所述虚拟对象进行编辑的结果后,利用所述编辑后的虚拟对象信息替换编辑前的虚拟对象信息。
根据本公开的示例性实施例,参考图21,相比于增强现实处理装置18,增强现实处理装置21还可以包括第二编辑结果处理模块211。
具体的,第二编辑结果处理模块211可以被配置为执行:在需要将所述虚拟对象信息发送至第三终端时,将编辑后的虚拟对象信息以及编辑前的虚拟对象信息一并发送至所述第三终端,以便在所述第三终端响应一虚拟对象选择操作,展示编辑后的虚拟对象或编辑前的虚拟对象信息。
根据本公开的示例性实施例,参考图22,相比于增强现实处理装置18,增强现实处理装置22还可以包括新虚拟对象匹配模块2201。
具体的,新虚拟对象匹配模块2201可以被配置为执行:获取所述第一终端发送的新的虚拟对象的信息;将所述新的虚拟对象的信息与所述第一终端所处场景的地图信息匹配。
由于本公开实施方式的增强现实处理装置的各个功能模块与上述方法实施方式中相同,因此在此不再赘述。
通过以上的实施方式的描述,本领域的技术人员易于理解,这里描述的示例实施方式可以通过软件实现,也可以通过软件结合必要的硬件的方式来实现。因此,根据本公开实施方式的技术方案可以以软件产品的形式体现出来,该软件产品可以存储在一个非易失性存储介质(可以是CD-ROM,U盘,移动硬盘等)中或网络上,包括若干指令以使得一台计算设备(可以是个人计算机、服务器、终端装置、或者网络设备等)执行根据本公开实施方式的方法。
此外,上述附图仅是根据本公开示例性实施例的方法所包括的处理的示意性说明,而不是限制目的。易于理解,上述附图所示的处理并不表明或限制这些处理的时间顺序。另外,也易于理解,这些处理可以是例如在多个模块中同步或异步执行的。
应当注意,尽管在上文详细描述中提及了用于动作执行的设备的若干模块或者单元,但是这种划分并非强制性的。实际上,根据本公开的实施方式,上文描述的两个或更多模块或者单元的特征和功能可以在一个模块或者单元中具体化。反之,上文描述的一个模块或者单元的特征和功能可以进一步划分为由多个模块或者单元来具体化。
本领域技术人员在考虑说明书及实践这里公开的内容后,将容易想到本公开的其他实施例。本申请旨在涵盖本公开的任何变型、用途或者适应性变化,这些变型、用途或者适应性变化遵循本公开的一般性原理并包括本公开未公开的本技术领域中的公知常识或惯用技术手段。说明书和实施例仅被视为示例性的,本公开的真正范围和精神由权利要求指出。
应当理解的是,本公开并不局限于上面已经描述并在附图中示出的精确结构,并且可以在不脱离其范围进行各种修改和改变。本公开的范围仅由所附的权利要求来限。

Claims (25)

  1. 一种增强现实处理方法,应用于第一终端,包括:
    获取所述第一终端的摄像模组采集的当前帧图像,提取所述当前帧图像的图像参数,将所述图像参数发送至云端,以便所述云端利用预存储的建图结果确定出与所述图像参数对应的虚拟对象的信息;
    接收所述云端发送的所述虚拟对象的信息并展示所述虚拟对象;
    响应针对所述虚拟对象的编辑操作,对所述虚拟对象进行编辑。
  2. 根据权利要求1所述的增强现实处理方法,其中,所述增强现实处理方法还包括:
    获取所述第一终端所处场景的位置信息;
    将所述位置信息发送至所述云端,以便所述云端确定出建图结果的搜索范围并利用所述搜索范围内的建图结果确定出与所述图像参数对应的虚拟对象信息。
  3. 根据权利要求1或2所述的增强现实处理方法,其中,响应针对所述虚拟对象的编辑操作,对所述虚拟对象进行编辑,包括:
    在所述第一终端的界面上,响应针对所述虚拟对象的选定操作,展示所述虚拟对象的编辑子界面;
    响应在所述编辑子界面上的编辑操作,对所述虚拟对象进行编辑。
  4. 根据权利要求1所述的增强现实处理方法,其中,所述虚拟对象进行编辑的类型包括:
    删除所述虚拟对象、移动所述虚拟对象、旋转所述虚拟对象、修改所述虚拟对象的属性中至少一种。
  5. 根据权利要求1或2所述的增强现实处理方法,其中,所述增强现实处理方法还包括:
    响应虚拟对象添加操作,在所述第一终端所处场景中添加新的虚拟对象;以及
    将所述新的虚拟对象的信息发送至所述云端。
  6. 根据权利要求1所述的增强现实处理方法,其中,所述图像参数包括所述当前帧图像的二维特征点信息和三维特征点信息;其中,提取所述当前帧图像的图像参数包括:
    对所述当前帧图像进行二维特征点提取,确定所述当前帧图像的二维特征点信息;
    获取所述二维特征点信息对应的深度信息,根据所述二维特征点信息以及所述二维特征点信息对应的深度信息确定所述当前帧图像的三维特征点信息。
  7. 根据权利要求6所述的增强现实处理方法,其中,获取所述二维特征点信息对应的深度信息,根据所述二维特征点信息以及所述二维特征点信息对应的深度信息确定所述当前帧图像的三维特征点信息,包括:
    获取由所述第一终端的深度感测模组采集的与所述当前帧图像对应的深度信息;
    将所述当前帧图像与所述当前帧图像对应的深度信息进行配准,确定所述当前帧图像上各像素点的深度信息;
    从所述当前帧图像上各像素点的深度信息中确定出与所述二维特征点信息对应的深度信息;
    利用所述二维特征点信息以及与所述二维特征点信息对应的深度信息,确定所述当前帧图像的三维特征点信息。
  8. 一种增强现实处理方法,应用于云端,包括:
    获取第一终端发送的当前帧图像的图像参数;
    利用预存储的建图结果确定出与所述当前帧图像的图像参数对应的虚拟对象信息;
    将所述虚拟对象信息发送至所述第一终端,以便在所述第一终端上展示虚拟对象;
    获取所述第一终端对所述虚拟对象进行编辑的结果并存储。
  9. 根据权利要求8所述的增强现实处理方法,其中,利用预存储的建图结果确定出与所述当前帧图像的图像参数对应的虚拟对象信息包括:
    获取所述第一终端所处场景的位置信息;
    确定与所述位置信息对应的建图结果的搜索范围;
    利用所述搜索范围内的建图结果确定出与当前帧图像的图像参数对应的虚拟对象信息。
  10. The augmented reality processing method according to claim 8 or 9, wherein determining, using the pre-stored mapping result, the virtual object information corresponding to the image parameters of the current frame image comprises:
    screening out, from the pre-stored mapping results, a reference image matching the image parameters of the current frame image, and determining a second terminal that captured the reference image;
    determining a pose of the current frame image relative to the second terminal using the image parameters of the current frame image and the image parameters of the reference image;
    determining a relative pose relationship between the first terminal and the second terminal according to the pose of the current frame image relative to the second terminal and pose information of the first terminal at the time the current frame image was captured;
    determining the virtual object information corresponding to the image parameters of the current frame image using the relative pose relationship between the first terminal and the second terminal in combination with virtual object information configured by the second terminal during mapping.
  11. The augmented reality processing method according to claim 10, wherein the image parameters of the current frame image comprise two-dimensional feature point information and three-dimensional feature point information of the current frame image, and the image parameters of the reference image comprise two-dimensional feature point information and three-dimensional feature point information of the reference image; wherein determining the pose of the current frame image relative to the second terminal using the image parameters of the current frame image and the image parameters of the reference image comprises:
    if the two-dimensional feature point information of the current frame image matches the two-dimensional feature point information of the reference image, determining a relative pose relationship between the three-dimensional feature point information of the current frame image and the three-dimensional feature point information of the reference image by means of iterative closest point, to obtain the pose of the current frame image relative to the second terminal.
  12. The augmented reality processing method according to claim 11, wherein before determining the relative pose relationship between the three-dimensional feature point information of the current frame image and the three-dimensional feature point information of the reference image, the augmented reality processing method further comprises:
    determining mismatched feature point information between the two-dimensional feature point information of the current frame image and the two-dimensional feature point information of the reference image;
    removing the mismatched feature point information from the three-dimensional feature point information of the current frame image, so as to determine the relative pose relationship between the three-dimensional feature point information of the current frame image with the mismatched feature point information removed and the three-dimensional feature point information of the reference image with the mismatched feature point information removed.
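For illustration of the pruning step in claims 11-12 (the claims do not prescribe a particular mismatch test), one simple way to discard mismatched correspondences before running iterative closest point is to gate tentative 3D point pairs on their residual relative to the typical residual. The median-based gate and the `threshold` value below are assumptions of this sketch:

```python
def prune_mismatches(pairs_3d, threshold=0.25):
    """Drop tentative 3D correspondences whose residual is an outlier,
    before ICP is run on the remaining pairs.

    pairs_3d: list of ((x, y, z), (x, y, z)) tentative matches between
              the current frame and the reference image.
    Returns only the pairs whose point-to-point distance stays within
    `threshold` of the median distance; the rest are treated as
    mismatched feature points and removed.
    """
    def dist(p, q):
        return sum((a - b) ** 2 for a, b in zip(p, q)) ** 0.5

    residuals = sorted(dist(p, q) for p, q in pairs_3d)
    median = residuals[len(residuals) // 2]
    # keep pairs close to the typical residual; outliers are mismatches
    return [(p, q) for p, q in pairs_3d if dist(p, q) <= median + threshold]
```

In practice a descriptor ratio test or RANSAC over the 2D matches serves the same purpose; either way, removing outliers first keeps ICP from converging to a biased pose.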
  13. The augmented reality processing method according to claim 10, wherein the image parameters of the current frame image comprise two-dimensional feature point information and three-dimensional feature point information of the current frame image, and the image parameters of the reference image comprise two-dimensional feature point information and three-dimensional feature point information of the reference image; wherein determining the pose of the current frame image relative to the second terminal using the image parameters of the current frame image and the image parameters of the reference image comprises:
    if the two-dimensional feature point information of the current frame image matches the two-dimensional feature point information of the reference image, associating the two-dimensional feature point information of the current frame image with the three-dimensional feature point information of the reference image to obtain point pair information;
    solving a perspective-n-point problem using the point pair information, and determining the pose of the current frame image relative to the second terminal according to the three-dimensional feature point information of the current frame image in combination with the solution result.
  14. The augmented reality processing method according to claim 13, wherein determining the pose of the current frame image relative to the second terminal in combination with the solution result comprises:
    determining, according to the solution result, a relative pose relationship between the three-dimensional feature point information of the current frame image and the three-dimensional feature point information of the reference image;
    taking the relative pose relationship between the three-dimensional feature point information of the current frame image and the three-dimensional feature point information of the reference image determined according to the solution result as an initial pose input, and determining, by means of iterative closest point, the relative pose relationship between the three-dimensional feature point information of the current frame image and the three-dimensional feature point information of the reference image, so as to determine the pose of the current frame image relative to the second terminal.
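For illustration of claims 13-14 (not part of the claimed method): after the perspective-n-point solution supplies an initial pose, each iterative closest point iteration re-matches correspondences and then solves a closed-form rigid alignment between the two matched 3D point sets. That inner alignment step can be sketched with the Kabsch algorithm; the use of NumPy is an assumption of this sketch:

```python
import numpy as np

def rigid_align(src, dst):
    """Best-fit rotation R and translation t with dst ≈ R @ src_i + t.

    src, dst: (N, 3) arrays of matched 3D feature points (current frame
    vs. reference image).  This closed-form solve is what each ICP
    iteration performs after re-establishing correspondences.
    """
    src_c = src - src.mean(axis=0)              # center both point sets
    dst_c = dst - dst.mean(axis=0)
    U, _, Vt = np.linalg.svd(src_c.T @ dst_c)   # 3x3 cross-covariance
    d = np.sign(np.linalg.det(Vt.T @ U.T))      # guard against reflection
    R = Vt.T @ np.diag([1.0, 1.0, d]) @ U.T
    t = dst.mean(axis=0) - R @ src.mean(axis=0)
    return R, t
```

A full ICP loop would alternate nearest-neighbor matching with this solve until the residual stops decreasing, seeding the first iteration with the PnP pose as claim 14 describes.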
  15. The augmented reality processing method according to claim 10, wherein the augmented reality processing method further comprises:
    if no reference image matching the image parameters of the current frame image exists in the pre-stored mapping results, sending a mapping prompt to the first terminal to prompt the first terminal to perform a map construction process for the scene where it is located.
  16. The augmented reality processing method according to claim 8, wherein the augmented reality processing method further comprises:
    determining an acquisition permission for the virtual object information;
    if the first terminal satisfies the acquisition permission, performing the process of sending the virtual object information to the first terminal;
    if the first terminal does not satisfy the acquisition permission, sending a permission error prompt to the first terminal.
  17. The augmented reality processing method according to claim 16, wherein the acquisition permission comprises an identifier whitelist; the augmented reality processing method further comprises:
    acquiring an identifier of the first terminal;
    if the identifier of the first terminal is in the identifier whitelist, the first terminal satisfies the acquisition permission; if the identifier of the first terminal is not in the identifier whitelist, the first terminal does not satisfy the acquisition permission.
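As an illustration of claims 16-17 (not part of the claimed method), the whitelist gate on the cloud side reduces to a membership check on the terminal identifier; the return markers below stand in for the actual responses (virtual object information vs. a permission error prompt) and are assumptions of this sketch:

```python
def check_access(terminal_id, whitelist):
    """Gate virtual object delivery on an identifier whitelist.

    terminal_id: identifier reported by the first terminal.
    whitelist:   set of identifiers permitted to acquire the virtual
                 object information.
    Returns ("send_virtual_object", id) when the terminal satisfies the
    acquisition permission, and ("permission_error", id) otherwise; in
    the method of claim 16 the cloud would then send the virtual object
    information or a permission error prompt, respectively.
    """
    if terminal_id in whitelist:
        return ("send_virtual_object", terminal_id)
    return ("permission_error", terminal_id)
```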
  18. The augmented reality processing method according to claim 8, wherein after acquiring the result of editing of the virtual object by the first terminal, the augmented reality processing method further comprises:
    replacing the pre-editing virtual object information with the edited virtual object information.
  19. The augmented reality processing method according to claim 8, wherein after acquiring the result of editing of the virtual object by the first terminal, the augmented reality processing method further comprises:
    when the virtual object information needs to be sent to a third terminal, sending both the edited virtual object information and the pre-editing virtual object information to the third terminal, so that the third terminal displays the edited virtual object or the pre-editing virtual object in response to a virtual object selection operation.
  20. The augmented reality processing method according to claim 8, wherein the augmented reality processing method further comprises:
    acquiring information of a new virtual object sent by the first terminal;
    matching the information of the new virtual object with map information of the scene where the first terminal is located.
  21. An augmented reality processing apparatus, applied to a first terminal, comprising:
    a parameter uploading module, configured to acquire a current frame image captured by a camera module of the first terminal, extract image parameters of the current frame image, and send the image parameters to a cloud, so that the cloud determines, using a pre-stored mapping result, information of a virtual object corresponding to the image parameters;
    a virtual object acquiring module, configured to receive the information of the virtual object sent by the cloud and display the virtual object;
    a virtual object editing module, configured to edit the virtual object in response to an editing operation on the virtual object.
  22. An augmented reality processing apparatus, applied to a cloud, comprising:
    a parameter acquiring module, configured to acquire image parameters of a current frame image sent by a first terminal;
    a virtual object determining module, configured to determine, using a pre-stored mapping result, virtual object information corresponding to the image parameters of the current frame image;
    a virtual object sending module, configured to send the virtual object information to the first terminal, so that a virtual object is displayed on the first terminal;
    an editing result acquiring module, configured to acquire and store a result of editing of the virtual object by the first terminal.
  23. An augmented reality processing system, comprising:
    a first terminal, configured to acquire a current frame image captured by a camera module, extract image parameters of the current frame image, and send the image parameters to a cloud; acquire virtual object information sent by the cloud and display a virtual object; edit the virtual object in response to an editing operation on the virtual object, and feed back a result of the editing to the cloud;
    the cloud, configured to acquire the image parameters; determine, using a pre-stored mapping result, virtual object information corresponding to the image parameters of the current frame image, and send the virtual object information to the first terminal; and acquire and store the result of editing of the virtual object by the first terminal.
  24. A computer-readable storage medium having a computer program stored thereon, wherein the program, when executed by a processor, implements the augmented reality processing method according to any one of claims 1 to 20.
  25. An electronic device, comprising:
    a processor; and
    a memory, configured to store one or more programs which, when executed by the processor, cause the processor to implement the augmented reality processing method according to any one of claims 1 to 20.
PCT/CN2020/137279 2019-12-24 2020-12-17 Augmented reality processing method and apparatus, system, storage medium and electronic device WO2021129514A1 (zh)

Priority Applications (2)

Application Number Priority Date Filing Date Title
EP20906655.4A EP4080464A4 (en) 2019-12-24 2020-12-17 AUGMENTED REALITY PROCESSING METHOD, APPARATUS AND SYSTEM, AND STORAGE MEDIUM, AND ELECTRONIC DEVICE
US17/847,273 US20220319136A1 (en) 2019-12-24 2022-06-23 Augmented reality processing method, storage medium, and electronic device

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
CN201911348471.0 2019-12-24
CN201911348471.0A CN111179435B (zh) 2019-12-24 2019-12-24 Augmented reality processing method and apparatus, system, storage medium and electronic device

Related Child Applications (1)

Application Number Title Priority Date Filing Date
US17/847,273 Continuation US20220319136A1 (en) 2019-12-24 2022-06-23 Augmented reality processing method, storage medium, and electronic device

Publications (1)

Publication Number Publication Date
WO2021129514A1 true WO2021129514A1 (zh) 2021-07-01

Family


Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2020/137279 WO2021129514A1 (zh) 2019-12-24 2020-12-17 Augmented reality processing method and apparatus, system, storage medium and electronic device

Country Status (4)

Country Link
US (1) US20220319136A1 (zh)
EP (1) EP4080464A4 (zh)
CN (1) CN111179435B (zh)
WO (1) WO2021129514A1 (zh)

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN113840049A (zh) 2021-09-17 2021-12-24 阿里巴巴(中国)有限公司 Image processing method, video stream scene switching method, apparatus, device and medium

Families Citing this family (10)

Publication number Priority date Publication date Assignee Title
CN111179435B (zh) 2019-12-24 2024-02-06 Oppo广东移动通信有限公司 Augmented reality processing method and apparatus, system, storage medium and electronic device
CN113298588A (zh) 2020-06-19 2021-08-24 阿里巴巴集团控股有限公司 Method, apparatus and electronic device for providing item object information
CN112070906A (zh) 2020-08-31 2020-12-11 北京市商汤科技开发有限公司 Augmented reality system and method and apparatus for generating augmented reality data
CN112070907A (zh) 2020-08-31 2020-12-11 北京市商汤科技开发有限公司 Augmented reality system and method and apparatus for generating augmented reality data
CN112070903A (zh) 2020-09-04 2020-12-11 脸萌有限公司 Virtual object display method and apparatus, electronic device and computer storage medium
CN112365530A (zh) 2020-11-04 2021-02-12 Oppo广东移动通信有限公司 Augmented reality processing method and apparatus, storage medium and electronic device
CN112837241A (zh) 2021-02-09 2021-05-25 贵州京邦达供应链科技有限公司 Method, device and storage medium for removing ghosting in mapping
CN113959444A (zh) 2021-09-30 2022-01-21 达闼机器人有限公司 Navigation method, apparatus and medium for an unmanned device, and unmanned device
CN114554112B (zh) 2022-02-18 2023-11-28 北京达佳互联信息技术有限公司 Video recording method, apparatus, terminal and storage medium
CN115908627B (zh) 2022-11-21 2023-11-17 北京城市网邻信息技术有限公司 Housing listing data processing method and apparatus, electronic device and storage medium

Citations (6)

Publication number Priority date Publication date Assignee Title
CN102598064A (zh) 2009-10-12 2012-07-18 Metaio有限公司 Method for depicting virtual information in a view of a real environment
CN102695032A (zh) 2011-02-10 2012-09-26 索尼公司 Information processing apparatus, information sharing method, program, and terminal device
US20150193982A1 (en) * 2014-01-03 2015-07-09 Google Inc. Augmented reality overlays using position and orientation to facilitate interactions between electronic devices
CN108510597A (zh) 2018-03-09 2018-09-07 北京小米移动软件有限公司 Virtual scene editing method and apparatus, and non-transitory computer-readable storage medium
CN108805635A (zh) 2017-04-26 2018-11-13 联想新视界(北京)科技有限公司 Virtual display method for an object and virtual device
CN111179435A (zh) 2019-12-24 2020-05-19 Oppo广东移动通信有限公司 Augmented reality processing method and apparatus, system, storage medium and electronic device

Family Cites Families (13)

Publication number Priority date Publication date Assignee Title
US20170153787A1 (en) * 2015-10-14 2017-06-01 Globalive Xmg Jv Inc. Injection of 3-d virtual objects of museum artifact in ar space and interaction with the same
CN107025662B (zh) * 2016-01-29 2020-06-09 成都理想境界科技有限公司 Method, server, terminal and system for implementing augmented reality
US10373381B2 (en) * 2016-03-30 2019-08-06 Microsoft Technology Licensing, Llc Virtual object manipulation within physical environment
CN106355153B (zh) * 2016-08-31 2019-10-18 上海星视度科技有限公司 Augmented-reality-based virtual object display method, apparatus and system
US10147237B2 (en) * 2016-09-21 2018-12-04 Verizon Patent And Licensing Inc. Foreground identification for virtual objects in an augmented reality environment
WO2019015261A1 (en) * 2017-07-17 2019-01-24 Chengdu Topplusvision Technology Co., Ltd. DEVICES AND METHODS FOR SCENE DETERMINATION
CN108550190A (zh) * 2018-04-19 2018-09-18 腾讯科技(深圳)有限公司 Augmented reality data processing method and apparatus, computer device and storage medium
EP3746869A1 (en) * 2018-05-07 2020-12-09 Google LLC Systems and methods for anchoring virtual objects to physical locations
CN108734736B (zh) * 2018-05-22 2021-10-26 腾讯科技(深圳)有限公司 Camera pose tracking method, apparatus, device and storage medium
JP6548241B1 (ja) * 2018-07-14 2019-07-24 株式会社アンジー Augmented reality program and information processing device
US11890063B2 (en) * 2018-12-17 2024-02-06 The Brigham And Women's Hospital, Inc. System and methods for a trackerless navigation system
CN109729285B (zh) * 2019-01-17 2021-03-23 广州方硅信息技术有限公司 Fuse-line grid special effect generation method and apparatus, electronic device and storage medium
CN110275968A (zh) * 2019-06-26 2019-09-24 北京百度网讯科技有限公司 Image data processing method and apparatus


Non-Patent Citations (1)

Title
See also references of EP4080464A4 *


Also Published As

Publication number Publication date
EP4080464A4 (en) 2023-10-04
CN111179435B (zh) 2024-02-06
CN111179435A (zh) 2020-05-19
US20220319136A1 (en) 2022-10-06
EP4080464A1 (en) 2022-10-26

Similar Documents

Publication Publication Date Title
WO2021129514A1 (zh) Augmented reality processing method and apparatus, system, storage medium and electronic device
WO2021175022A1 (zh) Map construction method, relocalization method and apparatus, storage medium and electronic device
WO2021203902A1 (zh) Virtual image implementation method and apparatus, storage medium and terminal device
JP7058760B2 (ja) Image processing method and apparatus, terminal, and computer program
WO2021184952A1 (zh) Augmented reality processing method and apparatus, storage medium and electronic device
CN105450736B (zh) Method and apparatus for connecting with virtual reality
CN111243632A (zh) Multimedia resource generation method, apparatus, device and storage medium
WO2022095537A1 (zh) Virtual object display method and apparatus, storage medium and electronic device
WO2021244140A1 (zh) Object measurement and virtual object processing method and apparatus, medium and electronic device
JP2016531362A (ja) Skin tone adjustment method, skin tone adjustment apparatus, program and recording medium
CN111835531B (zh) Session processing method and apparatus, computer device and storage medium
CN112118477B (zh) Virtual gift display method, apparatus, device and storage medium
CN110853095B (zh) Camera positioning method and apparatus, electronic device and storage medium
CN111311758A (zh) Augmented reality processing method and apparatus, storage medium and electronic device
JP2018503148A (ja) Method and apparatus for video playback
CN112270754A (zh) Local mesh map construction method and apparatus, readable medium and electronic device
CN111246300A (zh) Editing template generation method, apparatus, device and storage medium
CN112052897A (zh) Multimedia data capture method, apparatus, terminal, server and storage medium
CN111031391A (zh) Video soundtrack matching method, apparatus, server, terminal and storage medium
CN110225331B (zh) Selectively applying color to an image
US11816269B1 (en) Gesture recognition for wearable multimedia device using real-time data streams
WO2020056694A1 (zh) Augmented reality communication method and electronic device
CN113190307A (zh) Control adding method, apparatus, device and storage medium
CN112423011A (zh) Message reply method, apparatus, device and storage medium
WO2021129444A1 (zh) File clustering method and apparatus, storage medium and electronic device

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 20906655

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: DE

ENP Entry into the national phase

Ref document number: 2020906655

Country of ref document: EP

Effective date: 20220722