WO2021104037A1 - Data processing method, device, electronic device, and storage medium - Google Patents

Data processing method, device, electronic device, and storage medium

Info

Publication number
WO2021104037A1
WO2021104037A1 (PCT/CN2020/128443; CN2020128443W)
Authority
WO
WIPO (PCT)
Prior art keywords
map
data
real scene
target
virtual object
Prior art date
Application number
PCT/CN2020/128443
Other languages
English (en)
French (fr)
Inventor
黄锋华
Original Assignee
Oppo广东移动通信有限公司
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Oppo广东移动通信有限公司
Priority to EP20891862.3A (EP4057109A4)
Publication of WO2021104037A1
Priority to US17/723,319 (US20220245859A1)


Classifications

    • A HUMAN NECESSITIES
    • A63 SPORTS; GAMES; AMUSEMENTS
    • A63F CARD, BOARD, OR ROULETTE GAMES; INDOOR GAMES USING SMALL MOVING PLAYING BODIES; VIDEO GAMES; GAMES NOT OTHERWISE PROVIDED FOR
    • A63F13/00 Video games, i.e. games using an electronically generated display having two or more dimensions
    • A63F13/20 Input arrangements for video game devices
    • A63F13/21 Input arrangements for video game devices characterised by their sensors, purposes or types
    • A63F13/213 Input arrangements for video game devices characterised by their sensors, purposes or types comprising photodetecting means, e.g. cameras, photodiodes or infrared cells
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00 Image analysis
    • G06T7/70 Determining position or orientation of objects or cameras
    • G06T7/73 Determining position or orientation of objects or cameras using feature-based methods
    • G06T7/74 Determining position or orientation of objects or cameras using feature-based methods involving reference images or patches
    • A HUMAN NECESSITIES
    • A63 SPORTS; GAMES; AMUSEMENTS
    • A63F CARD, BOARD, OR ROULETTE GAMES; INDOOR GAMES USING SMALL MOVING PLAYING BODIES; VIDEO GAMES; GAMES NOT OTHERWISE PROVIDED FOR
    • A63F13/00 Video games, i.e. games using an electronically generated display having two or more dimensions
    • A63F13/40 Processing input control signals of video game devices, e.g. signals generated by the player or derived from the environment
    • A63F13/42 Processing input control signals of video game devices, e.g. signals generated by the player or derived from the environment by mapping the input signals into game commands, e.g. mapping the displacement of a stylus on a touch screen to the steering angle of a virtual vehicle
    • A63F13/428 Processing input control signals of video game devices, e.g. signals generated by the player or derived from the environment by mapping the input signals into game commands, e.g. mapping the displacement of a stylus on a touch screen to the steering angle of a virtual vehicle involving motion or position input signals, e.g. signals representing the rotation of an input controller or a player's arm motions sensed by accelerometers or gyroscopes
    • A HUMAN NECESSITIES
    • A63 SPORTS; GAMES; AMUSEMENTS
    • A63F CARD, BOARD, OR ROULETTE GAMES; INDOOR GAMES USING SMALL MOVING PLAYING BODIES; VIDEO GAMES; GAMES NOT OTHERWISE PROVIDED FOR
    • A63F13/00 Video games, i.e. games using an electronically generated display having two or more dimensions
    • A63F13/60 Generating or modifying game content before or while executing the game program, e.g. authoring tools specially adapted for game development or game-integrated level editor
    • A63F13/65 Generating or modifying game content before or while executing the game program, e.g. authoring tools specially adapted for game development or game-integrated level editor automatically by game devices or servers from real world data, e.g. measurement in live racing competition
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F3/00 Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F3/01 Input arrangements or combined input and output arrangements for interaction between user and computer
    • G06F3/011 Arrangements for interaction with the human body, e.g. for user immersion in virtual reality
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F3/00 Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F3/01 Input arrangements or combined input and output arrangements for interaction between user and computer
    • G06F3/011 Arrangements for interaction with the human body, e.g. for user immersion in virtual reality
    • G06F3/012 Head tracking input arrangements
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F3/00 Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F3/01 Input arrangements or combined input and output arrangements for interaction between user and computer
    • G06F3/03 Arrangements for converting the position or the displacement of a member into a coded form
    • G06F3/033 Pointing devices displaced or positioned by the user, e.g. mice, trackballs, pens or joysticks; Accessories therefor
    • G06F3/0346 Pointing devices displaced or positioned by the user, e.g. mice, trackballs, pens or joysticks; Accessories therefor with detection of the device orientation or free movement in a 3D space, e.g. 3D mice, 6-DOF [six degrees of freedom] pointers using gyroscopes, accelerometers or tilt-sensors
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F3/00 Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F3/14 Digital output to display device; Cooperation and interconnection of the display device with other functional units
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F3/00 Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F3/14 Digital output to display device; Cooperation and interconnection of the display device with other functional units
    • G06F3/1454 Digital output to display device; Cooperation and interconnection of the display device with other functional units involving copying of the display data of a local workstation or window to a remote workstation or window so that an actual copy of the data is displayed simultaneously on two or more displays, e.g. teledisplay
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T11/00 2D [Two Dimensional] image generation
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T19/00 Manipulating 3D models or images for computer graphics
    • G06T19/006 Mixed reality
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V20/00 Scenes; Scene-specific elements
    • G06V20/20 Scenes; Scene-specific elements in augmented reality scenes
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V40/00 Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V40/20 Movements or behaviour, e.g. gesture recognition
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F3/00 Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F3/01 Input arrangements or combined input and output arrangements for interaction between user and computer
    • G06F3/017 Gesture based interaction, e.g. based on a set of recognized hand gestures
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00 Indexing scheme for image analysis or image enhancement
    • G06T2207/30 Subject of image; Context of image processing
    • G06T2207/30244 Camera pose
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2210/00 Indexing scheme for image generation or computer graphics
    • G06T2210/61 Scene description
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V2201/00 Indexing scheme relating to image or video recognition or understanding
    • G06V2201/07 Target detection

Definitions

  • This application relates to the field of display technology, and more specifically, to a data processing method, device, electronic device, and storage medium.
  • Augmented Reality (AR) is a technology that increases the user's perception of the real world through information provided by a computer system. It has been widely used in various fields such as education, gaming, and healthcare, and as a result multi-person AR solutions have begun to appear. In the traditional multi-person AR solution, relocation is used to display each device's virtual objects in the other party's virtual scene, but the devices usually need to be in the same scene to complete the relocation, which makes relocation difficult.
  • In view of this, this application proposes a data processing method, device, electronic device, and storage medium.
  • In a first aspect, an embodiment of the present application provides a data processing method applied to a first device. The method includes: acquiring an image containing a first target object, where the first target object is in a first real scene where the first device is located; constructing, according to the image, a map corresponding to the first real scene to obtain map data; and transmitting the map data to a second device, where the map data is used to instruct the second device to perform relocation according to the map data and a second target object, the first target object and the second target object have at least two identical feature points, the second target object is in a second real scene where the second device is located, and the second real scene is different from the first real scene.
  • In a second aspect, an embodiment of the present application provides a data processing method applied to a second device. The method includes: acquiring map data transmitted by a first device, where the map data is obtained by the first device constructing, according to an acquired image containing a first target object, a map corresponding to the first real scene where the first device is located, and the first target object is in the first real scene; acquiring an image containing a second target object, where the second target object is in a second real scene where the second device is located, the first target object and the second target object have at least two identical feature points, and the second real scene is different from the first real scene; and performing relocation according to the map data and the image containing the second target object.
  • In another aspect, an embodiment of the present application provides a data processing device applied to a first device. The device includes a first image acquisition module, a map construction module, and a map transmission module. The first image acquisition module is configured to obtain an image containing a first target object, where the first target object is in a first real scene where the first device is located; the map construction module is configured to construct, according to the image, a map corresponding to the first real scene to obtain map data; and the map transmission module is configured to transmit the map data to a second device, where the map data is used to instruct the second device to perform relocation according to the map data and a second target object, the first target object and the second target object have at least two identical feature points, the second target object is in a second real scene where the second device is located, and the second real scene is different from the first real scene.
  • In another aspect, an embodiment of the present application provides a data processing device applied to a second device. The device includes a map acquisition module, a second image acquisition module, and a relocation module. The map acquisition module is configured to acquire map data transmitted by a first device, where the map data is obtained by the first device constructing, according to an acquired image containing a first target object, a map corresponding to the first real scene where the first device is located, and the first target object is in the first real scene; the second image acquisition module is configured to acquire an image containing a second target object, where the second target object is in a second real scene where the second device is located, the first target object and the second target object have at least two identical feature points, and the second real scene is different from the first real scene; and the relocation module is configured to perform relocation according to the map data and the image containing the second target object.
  • In another aspect, an embodiment of the present application provides an electronic device, including one or more processors, a memory, and one or more application programs, where the one or more application programs are stored in the memory and configured to be executed by the one or more processors, and the one or more programs are configured to execute the data processing method provided in the above-mentioned first aspect or the data processing method provided in the above-mentioned second aspect.
  • In another aspect, an embodiment of the present application provides a computer-readable storage medium. The computer-readable storage medium stores program code, and the program code can be called by a processor to execute the data processing method provided in the above-mentioned first aspect or the data processing method provided in the above-mentioned second aspect.
  • Fig. 1 shows a schematic diagram of an application scenario applicable to an embodiment of the present application.
  • Fig. 2 shows another schematic diagram of an application scenario applicable to an embodiment of the present application.
  • Fig. 3 shows a flowchart of a data processing method according to an embodiment of the present application.
  • Fig. 4 shows a flowchart of a data processing method according to another embodiment of the present application.
  • FIG. 5 shows a schematic diagram of a display effect provided by another embodiment of the present application.
  • FIG. 6 shows a schematic diagram of another display effect provided by another embodiment of the present application.
  • Fig. 7 shows a flowchart of a data processing method according to another embodiment of the present application.
  • Fig. 8 shows a flowchart of a data processing method according to still another embodiment of the present application.
  • FIG. 9 shows a schematic diagram of a display effect provided by still another embodiment of the present application.
  • FIG. 10 shows a schematic diagram of another display effect provided by still another embodiment of the present application.
  • FIG. 11 shows another schematic diagram of a display effect provided by still another embodiment of the present application.
  • Fig. 12 shows a flowchart of a data processing method according to yet another embodiment of the present application.
  • Fig. 13 shows a block diagram of a data processing device according to an embodiment of the present application.
  • Fig. 14 shows a block diagram of a data processing device according to another embodiment of the present application.
  • FIG. 15 shows a block diagram of an electronic device for executing the data processing method according to an embodiment of the present application.
  • FIG. 16 shows a storage unit for storing or carrying program code for implementing the data processing method according to an embodiment of the present application.
  • Augmented Reality (AR) is a technology that increases the user's perception of the real world through information provided by a computer system. It superimposes computer-generated virtual objects, scenes, system prompts, and other content onto the real scene, thereby enhancing or modifying the perception of the real-world environment or of the data representing it.
  • In a typical multi-person AR solution, the master device and the slave device each add and display their own virtual objects based on simultaneous localization and mapping (SLAM) technology, and then use relocation so that the respective virtual objects are displayed in the other party's virtual scene; the virtual objects can then be operated separately to interact, for example, two game models competing against each other.
  • Relocation mainly enables the master device and the slave device to know each other's position, that is, they need to share a common coordinate system.
  • The shared coordinate system can be the world coordinate system or the coordinate system of the master device.
  • The inventor found that in the traditional multi-person AR technical solution, although the virtual objects of each device can be synchronized to the other device's scene through relocation, the relocation can generally be completed only when the two devices are in the same place and observe the scene from similar angles, which makes it difficult for a user without experience or guidance to complete the relocation, resulting in a poor user experience.
  • Therefore, the inventors proposed the data processing method, device, electronic device, and storage medium provided by the embodiments of the present application, which can realize relocation according to corresponding target objects in different real scenes, thereby facilitating the realization of multi-person AR solutions and improving the user experience.
  • the specific data processing method will be described in detail in the subsequent embodiments.
  • FIG. 1 shows a schematic diagram of an application scenario of the data processing method provided by an embodiment of the present application.
  • the application scenario includes a data processing system 10.
  • the data processing system 10 can be used in a multi-person AR scenario.
  • the system 10 may include multiple electronic devices.
  • a first device 100 and a second device 200 are exemplarily shown in FIG. 1.
  • the electronic device may be a head-mounted display device, or a mobile device such as a mobile phone or a tablet computer.
  • the head-mounted display device may be an integrated head-mounted display device.
  • The electronic device can also be a smart terminal such as a mobile phone connected to an external head-mounted display device; that is, the electronic device can serve as the processing and storage device of the head-mounted display device, be plugged into or connected to the external head-mounted display device, and display the virtual content on the head-mounted display device.
  • the electronic device can also be a separate mobile terminal such as a mobile phone, and the mobile terminal can generate a virtual scene and display it on the screen.
  • different electronic devices may be in different real-world scenarios, and the electronic devices may communicate with each other.
  • A target object can be set in the real scene of each electronic device, and the target object can be used by the electronic device to build a map or to relocate; the target objects in the real scenes of different electronic devices can be the same, or they can correspond to each other.
  • For example, the real scene of the first device 100 in FIG. 1 may be provided with a first target object, and the first device 100 may scan the first target object and construct a map. After the map is constructed, the map data may be sent to the second device 200, and the second device 200 can perform relocation according to a second target object in the real scene where it is located; subsequently, the first device 100 and the second device 200 can synchronize virtual content according to the result of the relocation.
  • FIG. 2 shows another schematic diagram of the application scenario of the data processing method provided by the embodiment of the present application.
  • the application scenario includes a data processing system 10, and the data processing system 10 can be used in a multi-person AR scenario.
  • the processing system 10 may include multiple electronic devices and servers, and the multiple electronic devices can communicate with the server.
  • FIG. 2 exemplarily shows the first device 100, the second device 200, and the server 300.
  • the server 300 may be a traditional server, or a cloud server or the like.
  • the electronic devices can transmit data through the server, that is, the electronic device can transmit the data to the server, and the server can transmit the data to other electronic devices.
  • For example, the first device 100 can construct a map by scanning the first target object and send the map data to the server 300; the server 300 then transmits the map data to the second device 200, and the second device 200 can perform relocation based on the second target object in the real scene where it is located. The first device 100 and the second device 200 can then synchronize virtual content according to the result of the relocation.
  • FIG. 3 shows a schematic flowchart of a data processing method provided by an embodiment of the present application.
  • the data processing method is applied to the first device in the aforementioned data processing system, and the data processing system further includes a second device.
  • the following will elaborate on the process shown in FIG. 3, and the data processing method may specifically include the following steps:
  • Step S110 Obtain an image including a first target object, the first target object being in a first real scene where the first device is located.
  • a first target may be set in a real scene where the first device is located, and the first target may be a physical object with certain texture characteristics.
  • the first target is used by the first device to scan the first target to construct a map.
  • the first target object may include a pattern with a set texture, so that the first device can recognize the pattern in the image, thereby identifying the characteristic points in the pattern, and then constructing a map based on the characteristic points.
  • the specific shape and size of the first target may not be limited.
  • the outline of the target may be rectangular and the size may be 1 square meter.
  • the shape and size of the target may also be other shapes and sizes.
  • the first device may include an image capture device, which is used to capture images of a real scene and the like.
  • the image acquisition device may be an infrared camera or a visible light camera, and the specific type of the image acquisition device is not limited in the embodiments of the present application.
  • the first device can perform image acquisition on the first target object through the image acquisition device to obtain an image containing the first target object.
  • In some embodiments, an external image capture device may also be connected to the first device, so that the first device can capture images of the first target object through the external image capture device to obtain an image containing the first target object.
  • The pose of the external image capture device can be kept consistent with the pose of the first device, so that the first device can recognize its pose and the like based on the image collected by the external image capture device.
  • Step S120 According to the image, construct a map corresponding to the first real scene to obtain map data.
  • the first device may construct a map based on the image containing the first target object.
  • the first device may also obtain the pose data when the first device acquires the image, and construct a map based on the pose data of the first device and the image containing the first target.
  • the first device can recognize the feature points in the image of the first target.
  • The information of the feature points of the first target object may be pre-stored in the first device, and these feature points are associated with map content.
  • The first device may recognize the feature points of the first target object according to the pre-stored feature information, so as to identify each feature point of the first target object. After recognizing each feature point of the first target object, the first device can determine the corresponding map content according to each feature point; it then determines, according to its pose data, the location of each piece of map content in the spatial coordinate system corresponding to the first device, and constructs a map based on each piece of map content and its corresponding location.
  • The constructed map can serve as the map corresponding to the first real scene where the first device is located. It is understandable that by constructing the map, the content data of the map, the location of each piece of content, the pose data of the device, the information of each feature point of the first target object, and so on can also be obtained, and these data can be carried in the map data. Of course, the specific data carried in the map is not intended as a limitation.
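To make the mapping step more concrete, the following is a minimal sketch of how map data might be assembled from recognized feature points and the device pose. The function and field names (build_map, descriptor, position, device_pose) are hypothetical and not taken from the patent; the pose is assumed to be a 4x4 camera-to-world transform supplied as a NumPy array.

```python
import numpy as np

def build_map(feature_points, pose_c2w):
    """Minimal map-building sketch.

    feature_points: list of (descriptor, xyz_in_camera) pairs recognized on the
                    first target object, where xyz_in_camera is a 3-vector in the
                    first device's camera frame.
    pose_c2w:       4x4 NumPy transform from the camera frame to the first
                    device's spatial coordinate system.
    Returns a dict playing the role of the "map data" sent to the second device:
    per-point descriptors plus positions in the first device's coordinate system,
    together with the device pose.
    """
    map_points = []
    for descriptor, xyz_cam in feature_points:
        p = np.append(np.asarray(xyz_cam, dtype=float), 1.0)   # homogeneous point
        xyz_world = (pose_c2w @ p)[:3]                          # camera -> device frame
        map_points.append({"descriptor": descriptor, "position": xyz_world.tolist()})
    return {"points": map_points, "device_pose": pose_c2w.tolist()}
```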
  • Step S130 Transmit the map data to the second device, where the map data is used to instruct the second device to relocate according to the map data and a second target object.
  • The first target object and the second target object have at least two identical feature points, the second target object is in a second real scene where the second device is located, and the second real scene is different from the first real scene.
  • the first device may transmit the map data to the second device.
  • After the second device obtains the map data, it can perform relocation based on the map data and the second target object in the second real scene where the second device is located, so as to obtain the positional relationship between the first device and the second device and realize the alignment of the spatial coordinate system corresponding to the first device with the spatial coordinate system corresponding to the second device, so that the first device and the second device can display and interact with virtual content according to the aligned spatial coordinate systems.
  • the first target and the second target have at least two identical characteristic points.
  • In some embodiments, the first target object can be identical to the second target object, that is, all of their feature points are the same. In this case, the first device recognizes the feature points of the first target object and constructs a map based on those feature points, while the second device recognizes the feature points of the second target object and thereby recognizes the same map content that the first device constructed, so that the second device can subsequently use the map constructed by the first device for relocation according to the second target object.
  • In other embodiments, the first target object and the second target object have partially identical feature points, the number of which is at least two, and the map content corresponding to the remaining, different feature points of the first target object may correspond to the map content corresponding to the remaining, different feature points of the second target object. Therefore, the map content constructed by the first device according to the first target object and the map content recognized by the second device according to the second target object are partially identical, while the other, different map content corresponds to each other; the second device can also implement relocation based on the identical map content and the mutually corresponding map content. The relocation operation of the second device is introduced in subsequent embodiments.
  • In the data processing method provided by this embodiment, an image containing a first target object is acquired, where the first target object is in the first real scene where the first device is located; based on the image, a map corresponding to the first real scene is constructed to obtain map data, and the map data is then sent to the second device.
  • The map data is used to instruct the second device to relocate according to the map data and a second target object, where the first target object and the second target object have at least two identical feature points, the second target object is in the second real scene where the second device is located, and the first real scene is different from the second real scene. In this way, relocation can be achieved according to target objects in different real scenes, which facilitates the realization of multi-person AR solutions and improves the user experience.
  • FIG. 4 shows a schematic flowchart of a data processing method provided by another embodiment of the present application.
  • the data processing method is applied to the first device in the aforementioned data processing system, and the data processing system further includes a second device.
  • the following will elaborate on the process shown in FIG. 4, and the data processing method may specifically include the following steps:
  • Step S210 Obtain an image containing a first target object, where the first target object is in the first real scene where the first device is located, and the first target object includes a pattern pre-generated based on the first real scene.
  • the first target object may include a pre-generated pattern, and the pattern may be pre-generated according to the first real scene, that is, the feature points in the pattern correspond to the first real scene.
  • the pattern may be a pattern with texture characteristics, and multiple feature points in the pattern may correspond to the entity content in the first real scene, so that the subsequent map constructed by the first device according to the image containing the first target It can correspond to the first real scene.
  • The above-mentioned pattern may be set in the first real scene in the form of a printed decal, for example, by printing the pre-generated pattern and pasting the printed pattern in the first real scene.
  • The above-mentioned pattern may also be set on a physical object (such as a wooden board) in the form of a texture, and the physical object carrying the texture may be placed in the first real scene.
  • The specific form of the first target object is not intended as a limitation.
  • In some embodiments, when acquiring the image containing the first target object, the first device may acquire an image containing only the first target object, so as to increase the probability of successful map construction. It is understandable that, since map content is pre-established only for the feature information in the above pattern, if other feature points are recognized, no map content can be matched with them, which may interrupt the map construction process or leave non-map areas in the constructed map.
  • the first device may also obtain multiple images containing the first target object, and the multiple images containing the first target object may be obtained by image capture of the first target object at multiple shooting angles. image.
  • the multiple shooting angles may be multiple shooting angles during a 360-degree rotation along the center of the first target.
  • The brightness of the ambient light in the first real scene can also be detected; if the brightness of the ambient light is lower than a preset brightness, fill light can be added to improve the brightness, thereby improving the efficiency and quality of mapping.
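A rough illustration of the ambient-light check described above, assuming the preset brightness is expressed as a mean gray level of the captured frame; the threshold value of 60 is purely illustrative.

```python
import cv2
import numpy as np

def needs_fill_light(frame_bgr, brightness_threshold=60.0):
    """Return True if the mean gray level of the captured frame falls below a
    preset threshold, signalling that fill light should be enabled before mapping."""
    gray = cv2.cvtColor(frame_bgr, cv2.COLOR_BGR2GRAY)
    return float(np.mean(gray)) < brightness_threshold
```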
  • Step S220 Identify the pattern in the image, and obtain characteristic information in the pattern, where the characteristic information corresponds to the first real scene.
  • the first device can recognize the pattern in the image, thereby obtaining characteristic information in the pattern.
  • Since the pattern is generated in advance according to the first real scene, when the first device recognizes the pattern in the image, the obtained characteristic information of the pattern corresponds to the first real scene.
  • The feature information in the pattern may include multiple feature points, and these feature points may correspond to physical content in the first real scene, so that the map subsequently constructed by the first device from the image containing the first target object can correspond to the first real scene.
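As an illustration of extracting feature information from the pattern, the sketch below uses ORB features from OpenCV; the patent does not specify a particular detector or descriptor, so this is only one possible choice.

```python
import cv2

def extract_pattern_features(image_bgr):
    """Detect feature points in the textured pattern of the target object and
    compute descriptors for them. ORB is used purely as an illustrative choice."""
    gray = cv2.cvtColor(image_bgr, cv2.COLOR_BGR2GRAY)
    orb = cv2.ORB_create(nfeatures=500)
    keypoints, descriptors = orb.detectAndCompute(gray, None)
    return keypoints, descriptors
```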
  • Step S230 Obtain the pose data of the first device.
  • Since the constructed map is to be used for relocation of the second device, the constructed map should carry the pose data of the first device.
  • The first device may calculate its pose based on the collected image containing the first target object and the motion data collected by an inertial measurement unit (IMU).
  • the pose data is used to characterize the position and posture of the first device in the spatial coordinate system corresponding to the first device.
  • the position can be represented by spatial coordinates
  • the posture can be represented by a rotation angle.
  • the specific manner in which the first terminal obtains the pose data may not be a limitation.
  • the first device may also use a positioning module related to the position and the pose to obtain the pose data.
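One possible way to represent the pose data described above (position plus posture) is sketched below; packing it into a 4x4 homogeneous transform is an assumption of this sketch, not something prescribed by the patent.

```python
from dataclasses import dataclass
import numpy as np

@dataclass
class Pose:
    position: np.ndarray   # (3,) spatial coordinates in the device's coordinate system
    rotation: np.ndarray   # (3, 3) rotation matrix describing the device's posture

    def to_matrix(self):
        """Pack position and posture into one 4x4 homogeneous transform, a form
        convenient for the map-building and relocation steps."""
        T = np.eye(4)
        T[:3, :3] = self.rotation
        T[:3, 3] = self.position
        return T
```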
  • Step S240 According to the feature information and the pose data, a map corresponding to the first real scene is generated to obtain map data.
  • After the first device recognizes the feature information of the pattern of the first target object and obtains its own pose data, it can generate the map corresponding to the first real scene based on the feature information and the pose data.
  • Specifically, the first device may determine the map content corresponding to the first real scene based on the feature information, then determine, based on its pose data, the location of each piece of map content in the spatial coordinate system corresponding to the first device, and construct the map from each piece of map content and its corresponding location, thereby obtaining the map data.
  • the map data may carry the feature information in the above pattern, the pose data of the first device, the content data of the map, the location of each map content, etc.
  • the specific data carried may not be limited.
  • the spatial coordinate system corresponding to the first device may be the world coordinate system with the first device as the origin, or the camera coordinate system of the first device, etc., which is not limited herein.
  • the first device may create multiple maps in the above manner, and select a map that meets specified conditions from the multiple maps as the map that needs to be transmitted to the second device.
  • The specified condition may be that the degree of matching between the content in the map and the first real scene is higher than a set threshold; the threshold may be 90%, 95%, and so on, and the specific value is not a limitation. Understandably, the higher the match between the content in the map and the first real scene, the better the constructed map fits the first real scene and the better its quality. Maps of poor quality can be filtered out in the above manner, to avoid transmitting their map data to the second device and causing relocation failure.
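The map-selection rule described above could look roughly like the following; how the match degree between a candidate map and the first real scene is computed is left open here, and the function name is hypothetical.

```python
def select_map(candidate_maps, match_scores, threshold=0.9):
    """Pick the candidate map whose match degree with the first real scene is
    highest, and accept it only if that degree exceeds the set threshold
    (e.g. 0.9 or 0.95 as mentioned above). Returns None if no map qualifies."""
    best = max(zip(candidate_maps, match_scores), key=lambda pair: pair[1], default=None)
    if best is None or best[1] < threshold:
        return None
    return best[0]
```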
  • Step S250 Transmit the map data to a server, where the server is used to transmit the map data to the second device, and the map data is used to instruct the second device to perform relocation according to the map data and a second target object; the first target object and the second target object have at least two identical feature points, the second target object is in a second real scene where the second device is located, and the second real scene is different from the first real scene.
  • step S250 can refer to the content of the foregoing embodiment, which will not be repeated here.
  • Step S260 When the instruction information is obtained, control the first device to superimpose a virtual object in the first real scene, where the instruction information is used to indicate that the second device is successfully relocated.
  • the first device can monitor the received information.
  • When the first device obtains the indication information used to indicate the successful relocation of the second device, it means that the first device and the second device can display and operate virtual objects and can synchronize the displayed virtual objects. Therefore, the first device can generate and display a virtual object.
  • In some embodiments, the first device may also generate and display virtual prompt content, which can be used to prompt the user of the first device that virtual objects can now be added or displayed, so that the user knows that multi-person AR is currently available.
  • the first device may detect the user's operation.
  • When an adding operation for adding a virtual object is detected, the first device superimposes the virtual object on the first real scene according to the adding operation.
  • The adding operation can be a touch operation on the display screen, triggered by a set sliding gesture, sliding track, and so on; it can also be based on gesture recognition from a captured gesture image, where the adding operation is determined to be detected after the recognized gesture matches a set gesture. The specific form of the adding operation is not limited.
  • When the first device superimposes the virtual object into the first real scene, it maps the position of the first real scene at which the virtual object needs to be superimposed into the virtual space according to the transformation relationship between the virtual space and the real space, and generates the virtual object there; in this way, the virtual object is superimposed in the first real scene.
  • the first device may also superimpose the preset virtual object on the first real scene when it obtains the indication information for indicating the successful relocation of the second device. For example, in a game scene, after the relocation of the second device is completed, the first device may superimpose a preset virtual game character corresponding to the first device onto the first real scene.
  • the first device may be a mobile terminal, such as a mobile phone, a tablet computer, etc., and the virtual object may be displayed on the display screen of the mobile terminal.
  • the first device can acquire the scene image of the first real scene.
  • the scene image is acquired by the image acquisition device on the first device.
  • The first device can also acquire the superimposition position at which the virtual object needs to be superimposed on the first real scene, then determine the pixel coordinates of the virtual object according to the superimposition position, and finally synthesize the virtual object with the scene image according to the pixel coordinates to obtain a composite image.
  • When the first device determines the pixel coordinates of the virtual object according to the superimposition position, it may be that the first device has aligned the spatial coordinate system of the real space with the spatial coordinate system of the virtual space, that is, it knows the conversion relationship between the two; the pixel coordinates at which the virtual object is fused into the scene image are then determined according to the superimposition position.
  • When the first device synthesizes the virtual object and the scene image, it can merge the virtual object into the scene image according to the pixel coordinates to obtain the composite image.
  • In the composite image, the virtual object and the physical objects of the scene image are merged together, and displaying this image enables the user to observe the augmented reality display effect.
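A simplified sketch of this compositing step for a mobile terminal is given below. It assumes known camera intrinsics K and world-to-camera extrinsics (rvec, tvec), and overlays a pre-rendered 2D sprite of the virtual object rather than rasterizing a 3D model; all names are illustrative.

```python
import cv2
import numpy as np

def composite_virtual_object(scene_img, sprite, anchor_world, rvec, tvec, K, dist=None):
    """Project the virtual object's superimposition position (anchor_world, in the
    aligned spatial coordinate system) into pixel coordinates, then paste a
    pre-rendered sprite of the virtual object at that pixel."""
    pts = np.asarray([anchor_world], dtype=np.float32)
    pixel, _ = cv2.projectPoints(pts, rvec, tvec, K, dist)
    u, v = pixel.reshape(2).astype(int)

    out = scene_img.copy()
    h, w = sprite.shape[:2]
    x0, y0 = u - w // 2, v - h               # place the sprite "standing" on the anchor
    x1, y1 = x0 + w, y0 + h
    if 0 <= x0 and 0 <= y0 and x1 <= out.shape[1] and y1 <= out.shape[0]:
        out[y0:y1, x0:x1] = sprite           # naive opaque paste; alpha blending omitted
    return out
```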
  • the first device may be a head-mounted display device, or a mobile terminal connected to an external head-mounted display device, that is, the virtual object is displayed through the head-mounted display device.
  • the first device can obtain the superimposition position of the virtual object that needs to be superimposed on the first real scene, and the content data of the virtual object, and generate the virtual object to realize the superimposition of the virtual object on the first real scene.
  • The first device can convert the superimposition position into a spatial position in the virtual space according to the conversion relationship between the spatial coordinate system of the real space and that of the virtual space, obtain the spatial position at which the virtual object needs to be displayed in the virtual space, and render the virtual object according to that spatial position and its content data, thereby completing the generation of the virtual object.
  • the virtual object needs to be superimposed on the superimposition position of the first real scene, which can be determined according to the detected adding operation.
  • When the first device is a mobile terminal and the virtual object needs to be displayed on the display screen of the mobile terminal, the mobile terminal can display the scene image on the display screen so that the user can determine the superimposition position according to the scene image; the superimposition position of the virtual object can then be determined according to the user's touch operation on the screen.
  • When the first device is a head-mounted display device, the user's gesture action in the first real scene can be detected, and the superimposition position of the virtual object can be determined according to the position of the gesture action.
  • the manner of specifically determining the superimposition position of the virtual object that needs to be superimposed on the first real scene is not limited, for example, the superimposition position may also be preset.
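One common way to turn a touch point into a superimposition position, sketched under the assumption that the virtual object should be placed on a horizontal plane (for example, the plane of the first target object) in a y-up coordinate system; the patent leaves the exact method open, so this is only an illustrative choice.

```python
import numpy as np

def touch_to_world_position(u, v, K, pose_c2w, plane_height=0.0):
    """Back-project a touch point (u, v) into a ray in the device's spatial
    coordinate system and intersect it with a horizontal plane to obtain a
    candidate superimposition position. Returns None if no valid intersection."""
    ray_cam = np.linalg.inv(K) @ np.array([u, v, 1.0])   # pixel -> camera ray
    R, t = pose_c2w[:3, :3], pose_c2w[:3, 3]
    ray_world = R @ ray_cam
    if abs(ray_world[1]) < 1e-6:                          # ray parallel to the plane
        return None
    s = (plane_height - t[1]) / ray_world[1]              # y-up convention assumed
    return t + s * ray_world if s > 0 else None
```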
  • Step S270 Display the virtual object.
  • After the first device superimposes the virtual object on the first real scene, it can display the virtual object so that the user can see the effect of the virtual object superimposed on the real world, that is, the augmented reality display effect is realized.
  • When the first device is a mobile terminal and the virtual object needs to be displayed on the display screen of the mobile terminal, the first device can display, on the display screen, the composite image obtained by compositing the virtual object and the scene image, thereby realizing the augmented reality display effect.
  • When the first device is a head-mounted display device and the virtual object is displayed through the head-mounted display device, the screen display data of the virtual object can be obtained. The screen display data can include the RGB value and the corresponding pixel coordinates of each pixel on the display screen. The first device can generate a virtual picture according to the screen display data and project the generated virtual picture onto the display lens through a projection module to display the virtual object. Through the display lens, the user can see the virtual picture superimposed at the corresponding position in the first real scene, so as to realize the augmented reality display effect.
  • For an example of the display effect, please refer to FIG. 5. After adding a virtual game character A, the user can observe, through the display lens, the virtual game character A in the first real scene as well as the first target object 11 within the current field of view of the first real scene, realizing the augmented reality display effect.
  • Step S280 Obtain the pose data and display data of the virtual object.
  • After the first device generates and displays the virtual object, the second device also needs to display the virtual object synchronously. Therefore, the first device can determine the pose data and display data of the virtual object according to the generated virtual object.
  • The pose data can be the position and posture of the virtual object in the virtual space, and the display data can be the content data used to render the virtual object, such as vertex coordinates, colors, and so on.
  • Step S290 Send the display data and the pose data to the second device, and the display data and the pose data are used by the second device to synchronously display the virtual object.
  • the first device may send the display data and the pose data to the second device.
  • The second device can determine the position of the virtual object in its corresponding spatial coordinate system according to the pose data sent by the first device, render the virtual object according to the display data, and then display it.
  • In this way, the virtual object generated and displayed by the first device is synchronized to the second device for display.
  • the first device may also receive the display data and pose data of the target virtual object transmitted by the second device.
  • the target virtual object is a virtual object generated and displayed by the second device after relocation.
  • The pose data of the target virtual object can be data that the second device has already converted, based on the aligned spatial coordinate systems, into the spatial coordinate system corresponding to the first device, so that the first device can directly determine the display position of the target virtual object based on that pose data, generate the target virtual object, and display it. In this way, the virtual object generated by the first device and the virtual object generated by the second device are displayed synchronously.
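A minimal sketch of how the display data and pose data might be packaged for synchronization between the devices; the message fields and the JSON encoding are assumptions of this sketch, not part of the patent.

```python
import json
import numpy as np

def pack_virtual_object(object_id, pose_in_aligned_frame, display_data):
    """Serialize the pose data and display data of a newly added virtual object
    so they can be sent to the peer device, directly or via the server."""
    return json.dumps({
        "object_id": object_id,
        "pose": np.asarray(pose_in_aligned_frame).tolist(),  # 4x4 in the aligned coordinate system
        "display": display_data,                             # e.g. vertex coordinates, colors
    })

def unpack_and_localize(message, peer_to_local):
    """On the receiving device, convert the transmitted pose into the local
    spatial coordinate system using the transform obtained from relocation."""
    msg = json.loads(message)
    pose_local = np.asarray(peer_to_local) @ np.asarray(msg["pose"])
    return msg["object_id"], pose_local, msg["display"]
```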
  • For example, referring to FIG. 5 and FIG. 6 together, while the first device displays the virtual game character A, it receives the display data and pose data of the virtual game character B generated and displayed by the second device, generates the virtual game character B, and displays it. In this way, the virtual game character B added by the second device is synchronized to the first device for display, and the virtual game character A and the virtual game character B can be controlled to perform action interaction such as fighting, realizing the interaction of a fighting game.
  • the first device may also detect the operation data of the added virtual object, respond to the operation data, and synchronize the response result to the second device. For example, after moving the added virtual object, the new pose data can be synchronized to the second device.
  • In the data processing method provided by this embodiment, an image containing a first target object is acquired, where the first target object is in the first real scene where the first device is located and includes a pattern pre-generated based on the first real scene. The feature information in the pattern is recognized and the pose data of the first device is obtained, a map is constructed based on the feature information and the pose data to obtain map data, and the map data is then sent to the second device.
  • The map data is used to instruct the second device to relocate according to the map data and a second target object, where the first target object and the second target object have at least two identical feature points, the second target object is in the second real scene where the second device is located, and the first real scene is different from the second real scene.
  • In addition, when indication information indicating that the relocation of the second device has succeeded is obtained, a virtual object is generated and displayed, and the pose data and display data of the virtual object are transmitted to the second device, so that the second device can display the virtual object synchronously, realizing synchronous display in multi-person AR.
  • FIG. 7 shows a schematic flowchart of a data processing method provided by another embodiment of the present application.
  • the data processing method is applied to the second device in the above-mentioned data processing system, and the data processing system further includes the first device.
  • the flow shown in FIG. 7 will be described in detail below, and the data processing method may specifically include the following steps:
  • Step S310 Acquire map data transmitted by the first device, where the map data is constructed by the first device according to the acquired image containing the first target object corresponding to the first real scene where the first device is located The map of is obtained, and the first target is in the first real scene.
  • the map data transmitted by the first device may be transmitted to the server, and then the second device may receive the map data transmitted by the server. Therefore, when the first device is far away from the second device, a remote interaction solution in multi-person AR can be implemented.
  • Step S320 Acquire an image containing a second target object, the second target object is in a second real scene where the second device is located, and the first target object and the second target object have at least two With the same feature point, the second real scene is different from the first real scene.
  • After receiving the map data, the second device can, during relocation, obtain an image containing the second target object in the second real scene where it is located, so as to perform relocation according to the map data and the image containing the second target object.
  • the first target and the second target have at least two identical characteristic points.
  • In some embodiments, the first target object can be identical to the second target object, that is, all of their feature points are the same. In this case, the first device recognizes the feature points of the first target object and constructs a map based on those feature points, while the second device recognizes the feature points of the second target object and thereby recognizes the same map content that the first device constructed, so that the second device can subsequently use the map constructed by the first device for relocation according to the second target object.
  • In other embodiments, the first target object and the second target object have partially identical feature points, the number of which is at least two, and the map content corresponding to the remaining, different feature points of the first target object may correspond to the map content corresponding to the remaining, different feature points of the second target object. Therefore, the map content constructed by the first device according to the first target object and the map content recognized by the second device according to the second target object are partially identical, while the other, different map content corresponds to each other; the second device can also implement relocation based on the identical map content and the mutually corresponding map content.
  • In some embodiments, the first target object and the second target object may both carry textures whose patterns are generated in advance according to the first real scene where the first device is located, so as to facilitate relocation using the map data.
  • In some embodiments, when acquiring the image containing the second target object, the second device may also acquire an image containing only the second target object, so as to improve the probability of successful relocation.
  • When acquiring the image containing the second target object, the second device can also detect the brightness of the ambient light in the second real scene; if the brightness is lower than a preset brightness, a light supplement module (for example, a fill light) can be used to add light, so as to improve efficiency and quality.
  • Step S330 Perform relocation according to the map data and the image containing the second target.
  • the second device may perform relocation according to the map data and the image containing the second target object.
  • the second device may recognize the characteristic information of the second target, for example, the characteristic information in the pattern of the texture.
  • The second device can also determine the feature information of the first target object according to the map data, and match the first target object with the second target object according to the feature information of both. If the similarity between the two is greater than a set similarity, the second device determines, according to the map data and the image containing the second target object, the positional relationship between the first device and the second device, and aligns the spatial coordinate system corresponding to the first device with the spatial coordinate system corresponding to the second device to complete the relocation.
  • the first device and the second device can subsequently display and interact with virtual content according to the aligned spatial coordinate system.
  • In some embodiments, the second device can recognize feature information from the image containing the second target object and determine the map content corresponding to that feature information; it then obtains its own pose data, determines, according to that pose data, the location of each piece of map content in the spatial coordinate system corresponding to the second device, and determines the currently recognized map based on each piece of map content and its corresponding location. Since the first target object and the second target object are the same, or their corresponding map content corresponds to each other, the second device can compare the map constructed by the first device with the map it has recognized.
  • Through this comparison, the positional relationship between the first device and the second device can be analyzed, and then, according to the positional relationship, the conversion relationship between the spatial coordinate system corresponding to the first device and the spatial coordinate system corresponding to the second device can be determined, thereby realizing the alignment of the coordinate systems and completing the relocation. After the relocation is completed, the first device and the second device can synchronize virtual content using the aligned spatial coordinate systems.
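One simple way to realize the coordinate-system alignment described above, assuming each device can express the pose of the shared map (i.e. the target object) in its own spatial coordinate system as a 4x4 homogeneous transform; the function names are illustrative, not taken from the patent.

```python
import numpy as np

def alignment_transform(T_map_in_first, T_map_in_second):
    """Derive the conversion from the second device's spatial coordinate system to
    the first device's, given the pose of the shared map expressed in each frame."""
    # (first <- map) composed with (map <- second)
    return np.asarray(T_map_in_first) @ np.linalg.inv(np.asarray(T_map_in_second))

def to_first_frame(T_second_to_first, point_in_second):
    """Express a point from the second device's coordinate system in the first
    device's coordinate system, so virtual content can be synchronized."""
    p = np.append(np.asarray(point_in_second, dtype=float), 1.0)
    return (np.asarray(T_second_to_first) @ p)[:3]
```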
  • the spatial coordinate system corresponding to the first device may be the world coordinate system with the first device as the origin, or the camera coordinate system of the first device, which is not limited here.
  • the spatial coordinate system corresponding to the second device may be the world coordinate system with the second device as the origin, or the camera coordinate system of the second device, which is not limited here.
  • In the data processing method provided by this embodiment, the second device obtains the transmitted map data, where the map data is obtained by the first device constructing, according to the acquired image containing the first target object, the map corresponding to the first real scene where the first device is located, and the first target object is in the first real scene. The second device then obtains an image containing the second target object, where the second target object is in the second real scene where the second device is located and has a corresponding relationship with the first target object, and performs relocation according to the map data and the image containing the second target object. In this way, relocation can be realized according to target objects in different real scenes, which facilitates the realization of multi-person AR solutions and improves the user experience.
  • FIG. 8 shows a schematic flowchart of a data processing method provided by another embodiment of the present application.
  • the data processing method is applied to the second device in the above-mentioned data processing system, and the data processing system further includes the first device.
  • the flow shown in FIG. 8 will be described in detail below, and the data processing method may specifically include the following steps:
  • Step S410 Acquire map data transmitted by the first device, where the map data is obtained by the first device constructing, according to the acquired image containing the first target object, a map corresponding to the first real scene where the first device is located, and the first target object is in the first real scene.
  • Step S420 Acquire an image containing a second target object, where the second target object is in the second real scene where the second device is located, the first target object and the second target object have at least two or more identical feature points, and the second real scene is different from the first real scene.
  • step S410 and step S420 can refer to the content of the foregoing embodiment, and will not be repeated here.
  • Step S430 According to the image containing the second target object and the map data, determine first pose data of the second device in a first spatial coordinate system, where the first spatial coordinate system is the spatial coordinate system corresponding to the first device.
  • In some implementations, the second device can determine, from the image containing the second target object, the map content corresponding to the feature information in the image and the pose data of the second device, determine the location of each piece of map content in the spatial coordinate system corresponding to the second device, and thereby determine the currently recognized map. The second device can then perform feature matching between the recognized map and the received map, extract the image features common to both, output a set of point matches, and estimate the pose of the second device relative to the first spatial coordinate system corresponding to the first device. The pose estimation algorithm may be a PnP (perspective-n-point) algorithm; the specific pose estimation algorithm is not limited here.
  • the spatial coordinate system corresponding to the first device may be the world coordinate system with the first device as the origin, or the camera coordinate system of the first device, which is not limited here.
  • In one implementation, when the feature points of the first target object and the second target object are all the same, the recognized map can be directly matched with the received map, and the pose data of the second device in the first spatial coordinate system is subsequently determined from the matching result.
  • In another implementation, when the feature points of the first target object and the second target object are only partially the same, the map content corresponding to the differing feature points of the two target objects corresponds to each other. In that case, part of the recognized map content can be matched with the received map content, while the remaining map content is matched with the map content corresponding to the differing feature points of the first target object, and the pose data of the second device in the first spatial coordinate system is subsequently determined from the matching result.
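  • Since the embodiment notes that the pose estimation algorithm may be a PnP algorithm, the sketch below shows one common way to obtain the first pose data with OpenCV's solvePnP: matched map points expressed in the first spatial coordinate system are paired with their 2D projections in the image containing the second target object. The camera intrinsics and the point-matching step are assumptions for illustration only.

```python
import numpy as np
import cv2

def estimate_first_pose(map_points_3d, image_points_2d, camera_matrix):
    """map_points_3d: (N, 3) matched map points in the first spatial
    coordinate system; image_points_2d: (N, 2) projections of those points
    in the second device's image. Returns the 4x4 pose of the second device
    expressed in the first spatial coordinate system (the first pose data)."""
    ok, rvec, tvec = cv2.solvePnP(
        map_points_3d.astype(np.float32),
        image_points_2d.astype(np.float32),
        camera_matrix,
        np.zeros(5),                      # assume negligible lens distortion
        flags=cv2.SOLVEPNP_ITERATIVE)
    if not ok:
        raise RuntimeError("PnP failed; more matched feature points are needed")
    R_cam, _ = cv2.Rodrigues(rvec)        # rotation: first system -> camera
    T_first_to_camera = np.eye(4)
    T_first_to_camera[:3, :3], T_first_to_camera[:3, 3] = R_cam, tvec.ravel()
    # Invert to obtain the camera (second device) pose in the first system.
    return np.linalg.inv(T_first_to_camera)

# Example pinhole intrinsics for a 640x480 camera (assumed values).
K = np.array([[525.0, 0.0, 320.0],
              [0.0, 525.0, 240.0],
              [0.0, 0.0, 1.0]])
```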
  • Step S440 Acquire second pose data of the second terminal in a second spatial coordinate system, where the second spatial coordinate system is a spatial coordinate system corresponding to the second device.
  • the second terminal may determine the second pose data of the second terminal in the second spatial coordinate system according to the acquired image containing the second target object and the data collected by the IMU.
  • the spatial coordinate system corresponding to the second device may be the world coordinate system with the second device as the origin, or the camera coordinate system of the second device, which is not limited here.
  • Step S450 Obtain a coordinate system conversion relationship between the first spatial coordinate system and the second spatial coordinate system according to the first pose data and the second pose data.
  • In some implementations, the second device may obtain coordinate system transformation data between the first spatial coordinate system and the second spatial coordinate system according to the first pose data and the second pose data, for example a coordinate system transformation matrix, and use the transformation data as the coordinate system conversion relationship between the first spatial coordinate system and the second spatial coordinate system. The transformation data can subsequently be used to convert the pose data of virtual objects generated by the first device, so as to synchronously display the virtual objects generated and displayed by the first device.
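  • Because the first pose data and the second pose data describe the same physical device at the same moment, the conversion relationship follows directly from the two poses. A minimal sketch, assuming both poses are represented as 4x4 homogeneous matrices:

```python
import numpy as np

def conversion_matrix(first_pose, second_pose):
    """first_pose: pose of the second device in the first spatial coordinate
    system; second_pose: pose of the same device in its own second spatial
    coordinate system. A point p1 in the first system then maps to the
    second system as p2 = T @ p1 with T = second_pose @ inv(first_pose)."""
    return second_pose @ np.linalg.inv(first_pose)

# Usage: convert one homogeneous point from the first system to the second.
T = conversion_matrix(np.eye(4), np.eye(4))
p_second = T @ np.array([0.1, 0.2, 0.3, 1.0])
```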
  • After the relocation is completed, the second device may also generate instruction information and transmit it to the first device. The instruction information is used to indicate that the relocation of the second device is successful, so that the user of the first device knows that a virtual object can be added, or so that the first device can generate a preset virtual object.
  • Step S460 Obtain display data and third pose data of the virtual object displayed by the first device.
  • the second device may correspondingly receive the display data and the third pose data.
  • Step S470 Convert the third pose data into the fourth pose data in the second spatial coordinate system according to the coordinate system conversion relationship.
  • the second device can convert the third pose data into the fourth pose data in the second space coordinate system according to the coordinate system conversion relationship obtained above, thereby realizing the pose data of the virtual object Conversion from the first space coordinate system corresponding to the first device to the second space coordinate system corresponding to the second device.
  • Step S480 superimpose the virtual object in the second real scene according to the display data and the fourth pose data.
  • In some implementations, the second device can determine the position of the virtual object in the second spatial coordinate system according to the fourth pose data obtained after the conversion, and generate the virtual object according to that position and the display data, so that the virtual object is superimposed on the second real scene.
  • For a specific way of superimposing the virtual object on the second real scene, reference may be made to the way the first device superimposes the virtual object on the first real scene in the foregoing embodiment, which is not repeated here.
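  • Steps S460 to S480 can be combined into a single receiver-side handler: parse the message from the first device, convert the third pose data with the stored conversion relationship, and use the result to anchor the virtual object. The message layout below (display data plus a flattened 4x4 pose) is an assumed wire format; the embodiment does not specify one.

```python
import json
import numpy as np

def handle_virtual_object(message_json, T_first_to_second):
    """Return the display data and the fourth pose data (the object's pose
    in the second spatial coordinate system) for a virtual object received
    from the first device. Field names are illustrative assumptions."""
    msg = json.loads(message_json)
    third_pose = np.array(msg["pose"], dtype=float).reshape(4, 4)  # first system
    fourth_pose = T_first_to_second @ third_pose                   # second system
    return msg["display_data"], fourth_pose

# The translation column of the fourth pose gives the anchor point at which
# the virtual object is superimposed on the second real scene.
```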
  • In some implementations, after the second device is relocated, it may also generate and display a target virtual object according to a user's adding operation or according to a preset rule.
  • For example, referring to FIG. 9, after the virtual character B is added, the user of the second device can observe the virtual character B in the second real scene through the display lens, and can also observe the second target object 11 currently within the field of view in the second real scene, achieving an augmented reality display effect.
  • In addition, if the second device has already received the display data and pose data of a virtual object generated by the first device by the time it generates and displays the target virtual object, it can also display the virtual object generated and displayed by the first device synchronously.
  • For example, referring to FIG. 9 and FIG. 10, after the second device displays the virtual game character B, it receives the display data and pose data of the virtual game character A generated and displayed by the first device, generates the virtual game character A, and displays it. The virtual game character A added by the first device is thereby synchronized to the second device for display, and the virtual game character B can be controlled to interact with the virtual game character A, for example to fight, realizing the interaction of a fighting game.
  • the virtual object may also be superimposed on the location area where the first target object and the second target object are located.
  • the content displayed on the display screen may be an image of the virtual object superimposed on the first target object, or an image of the virtual object superimposed on the second target object.
  • the display screen of the mobile terminal may only display contents such as chessboards and chess pieces superimposed on the first target or the second target to better realize the game display effect.
  • Specifically, the first device and the second device are both mobile terminals. After the first device constructs a map according to the first target object, and the second device relocates according to the second target object and the constructed map, the first device can display the chessboard superimposed on the first target object, and the second device can synchronously display the chessboard superimposed on the second target object. When the first device adds a "black chess piece" to the chessboard, the first device can send the display data and pose data of the added "black chess piece" to the second device, and the second device displays the "black chess piece" at the same position on the chessboard according to that display data and pose data. Similarly, when the second device adds a "white chess piece" to the chessboard, the second device can send the display data and pose data of the added "white chess piece" to the first device, and the first device displays the "white chess piece" at the same position on the chessboard according to that display data and pose data. In addition, when the first device displays game prompt content, the second device can also display the game prompt content synchronously.
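  • In the chess example, each "add a piece" action only needs to carry the display data and the pose data of the new piece. A minimal sender-side sketch follows; the JSON field names and the piece description are assumptions used for illustration, not a format defined by the embodiment.

```python
import json
import numpy as np

def chess_piece_message(piece_id, color, pose_4x4):
    """Package the display data and pose data of a newly added chess piece
    so the peer device can render it at the same board position."""
    return json.dumps({
        "type": "add_virtual_object",
        "display_data": {"model": "chess_piece", "id": piece_id, "color": color},
        "pose": np.asarray(pose_4x4, dtype=float).ravel().tolist(),
    })

# Example: a black piece placed 5 cm above the board origin (assumed pose).
pose = np.eye(4)
pose[2, 3] = 0.05
payload = chess_piece_message("piece-01", "black", pose)
```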
  • Likewise, the second device can send the display data and pose data of the target virtual object to the first device, so that the virtual object generated and displayed by the second device is synchronized to the first device for display. It should be noted that, since the target virtual object displayed by the second device is generated from a position in the second spatial coordinate system, its pose data can be converted into the first spatial coordinate system according to the coordinate system conversion relationship before being sent, so that the first device can generate the target virtual object according to the pose data sent by the second device.
  • the second device may also detect the operation data of the added virtual object, respond to the operation data, and synchronize the response result to the first device. For example, after moving the added virtual object, the new pose data can be synchronized to the first device. For another example, referring to FIG. 11, in the above-mentioned example of a fighting game scene, the virtual characters can be switched to enrich the gameplay of the fighting game.
  • In the data processing method provided by this embodiment of the application, the second device obtains the transmitted map data, where the map data is obtained by the first device constructing, according to the acquired image containing the first target object, a map corresponding to the first real scene where the first device is located, and the first target object is in the first real scene. The second device then obtains an image containing the second target object, where the second target object is in the second real scene where the second device is located and corresponds to the first target object. According to the image containing the second target object and the map data, the second device determines its first pose data in the first spatial coordinate system corresponding to the first device, obtains its second pose data in its own second spatial coordinate system, and determines the coordinate system conversion relationship between the first spatial coordinate system and the second spatial coordinate system according to the first pose data and the second pose data, thereby realizing relocation. Once the coordinate system conversion relationship is obtained, virtual objects can be synchronized according to it, realizing synchronized display and interaction in multi-person AR.
  • FIG. 12 shows a schematic flowchart of a data processing method provided by yet another embodiment of the present application.
  • the data processing method is applied to a data processing system.
  • the data processing system includes a first device, a second device, and a server, and both the first device and the second device are in communication connection with the server.
  • the flow shown in FIG. 12 will be described in detail below, and the data processing method may specifically include the following steps:
  • Step S501 construct a map.
  • the manner in which the first device constructs the map can refer to the content of the foregoing embodiment, which will not be repeated here.
  • Step S502 Upload the map to the server.
  • After the first device constructs the map, it can upload the map to the server.
  • Step S503 Transmit the map data to the second device.
  • the server may transmit the map data to the second device.
  • Step S504 Perform relocation.
  • the second device may perform relocation based on the map data and the image containing the second target object.
  • For the specific relocation method, reference may be made to the foregoing embodiment; details are not repeated here.
  • Step S505 the relocation is successful.
  • Step S506 Send the instruction information to the server.
  • the indication information is used to indicate that the relocation of the second device is successful.
  • Step S507 Transmit the instruction information to the first device.
  • After the server receives the instruction information, it can transmit the instruction information to the first device, so that the first device knows that the relocation of the second device is successful.
  • Step S508 receiving the instruction information.
  • the first device can receive the indication information.
  • Step S509 Add a virtual object.
  • the first device and the second device may each add virtual objects to the virtual scene.
  • Step S510 Transmit the added virtual object.
  • After the first device adds a virtual object, it can transmit the display data and pose data of the added virtual object to the server, and the server can forward them to the second device, so that the second device synchronously displays the virtual object added by the first device. Similarly, after the second device adds a virtual object, it can transmit the display data and pose data of the added virtual object to the server, and the server can forward them to the first device, so that the first device synchronously displays the virtual object added by the second device. Once the added virtual objects have been displayed on both devices, synchronous display of the virtual objects is achieved, and subsequent interactions can be performed according to the displayed virtual objects.
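  • The server's role in step S510 is essentially a relay: it forwards the display data and pose data of an added virtual object from the sending device to the other device. A minimal sketch of that forwarding logic, with session handling reduced to a dictionary of delivery callbacks (an assumption for illustration):

```python
# device_id -> callable that delivers a message to that device's connection
connected = {}

def register(device_id, send_fn):
    """Record how to reach a device (e.g. a websocket send function)."""
    connected[device_id] = send_fn

def forward_virtual_object(sender_id, message):
    """Forward an added-virtual-object message to every other device."""
    for device_id, send_fn in connected.items():
        if device_id != sender_id:
            send_fn(message)

# Usage: two devices register, then a message from "first" reaches "second".
register("first", lambda m: None)
received = []
register("second", received.append)
forward_virtual_object("first", '{"type": "add_virtual_object"}')
```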
  • Step S511 Operate the virtual object.
  • the first device may operate the virtual object corresponding to the first device according to the operation of the user.
  • the second device can operate the virtual object corresponding to the second device according to the user's operation.
  • Step S512 Transmit the current pose data of the virtual object.
  • After a virtual object is operated, its pose may change. Therefore, the current pose data of the virtual object can be transmitted so that the pose change of the virtual object is synchronized.
  • FIG. 13 shows a structural block diagram of a data processing apparatus 400 according to an embodiment of the present application.
  • The data processing apparatus 400 is applied to the first device in the aforementioned data processing system, and the data processing system further includes a second device.
  • the data processing device 400 includes: a first image acquisition module 410, a map construction module 420, and a map transmission module 430.
  • The first image acquisition module 410 is configured to acquire an image containing a first target object, the first target object being in the first real scene where the first device is located. The map construction module 420 is configured to construct, according to the image, a map corresponding to the first real scene to obtain map data. The map transmission module 430 is configured to transmit the map data to the second device, where the map data is used to instruct the second device to perform relocation according to the map data and a second target object, the first target object and the second target object have at least two or more identical feature points, the second target object is in the second real scene where the second device is located, and the second real scene is different from the first real scene.
  • In some implementations, the data processing apparatus 400 may further include: a content superposition module, configured to control the first device to superimpose a virtual object in the first real scene after the map is sent to the second device and when instruction information sent by the second device is received, where the instruction information is used to indicate that the relocation of the second device is successful; and a content display module, configured to display the virtual object.
  • In this implementation, the content superposition module may be specifically configured to: obtain a scene image of the first real scene; obtain the superimposition position at which the virtual object needs to be superimposed on the first real scene; determine the pixel coordinates of the virtual object according to the superimposition position; and synthesize the virtual object with the scene image according to the pixel coordinates to obtain a composite image.
  • The content display module may be specifically configured to display the composite image.
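  • The synthesis step of the content superposition module can be pictured as pasting the rendered virtual object into the scene image at the pixel coordinates derived from the superimposition position. A minimal alpha-blending sketch, assuming the rendered object comes with a per-pixel alpha mask and fits inside the scene image:

```python
import numpy as np

def composite(scene_image, object_image, object_alpha, top_left):
    """scene_image: HxWx3 uint8 scene image; object_image: hxwx3 rendering of
    the virtual object; object_alpha: hxw mask in [0, 1]; top_left: (row, col)
    pixel coordinate of the object. Returns the composite image."""
    out = scene_image.astype(np.float32).copy()
    h, w = object_image.shape[:2]
    r, c = top_left
    alpha = object_alpha[..., None]                       # broadcast over RGB
    region = out[r:r + h, c:c + w]
    out[r:r + h, c:c + w] = alpha * object_image + (1.0 - alpha) * region
    return out.astype(np.uint8)

# Usage with synthetic data: a 20x20 white square blended into a dark scene.
scene = np.zeros((480, 640, 3), dtype=np.uint8)
obj = np.full((20, 20, 3), 255, dtype=np.uint8)
mask = np.ones((20, 20), dtype=np.float32)
result = composite(scene, obj, mask, (100, 200))
```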
  • the content superimposition module may be specifically configured to: obtain the superimposition position of a virtual object that needs to be superimposed on the first real scene; and generate the virtual object according to the superimposition position and content data of the virtual object.
  • Further, acquiring the superimposition position at which the virtual object needs to be superimposed on the first real scene may include: acquiring, according to a detected adding operation, the superimposition position at which the virtual object needs to be superimposed on the first real scene.
  • In some implementations, the data processing apparatus 400 may further include: a data acquisition module, configured to acquire the pose data and display data of the virtual object after the virtual object is generated and displayed; and a data sending module, configured to send the display data and the pose data to the second device, where the display data and the pose data are used by the second device to synchronously display the virtual object.
  • In some implementations, the map construction module 420 may include: a pattern recognition unit, configured to recognize the pattern in the image and obtain feature information in the pattern, where the feature information corresponds to the first real scene; a data acquisition unit, configured to acquire the pose data of the first device; and a map generation unit, configured to generate a map corresponding to the first real scene according to the feature information and the pose data to obtain map data.
  • the map transmission module may be specifically configured to transmit the map data to a server, and the server is configured to transmit the map data to the second device.
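  • The module structure of the data processing apparatus 400 can be read as a short pipeline: acquire an image of the first target object, build the map, transmit the map data. The sketch below only illustrates that composition; the callables are placeholders, not the embodiment's implementation.

```python
from dataclasses import dataclass
from typing import Any, Callable

@dataclass
class DataProcessingApparatus400:
    acquire_image: Callable[[], Any]        # first image acquisition module 410
    build_map: Callable[[Any], dict]        # map construction module 420
    transmit_map: Callable[[dict], None]    # map transmission module 430

    def run(self) -> dict:
        image = self.acquire_image()        # image containing the first target object
        map_data = self.build_map(image)    # map of the first real scene
        self.transmit_map(map_data)         # send to the server / second device
        return map_data

# Usage with placeholder callables (for illustration only).
apparatus = DataProcessingApparatus400(
    acquire_image=lambda: "image-with-first-target",
    build_map=lambda img: {"features": [], "device_pose": None, "source": img},
    transmit_map=lambda m: None,
)
map_data = apparatus.run()
```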
  • FIG. 14 shows a structural block diagram of a data processing apparatus 500 provided by another embodiment of the present application.
  • The data processing apparatus 500 is applied to the second device in the aforementioned data processing system, and the data processing system further includes the first device.
  • the data processing device 500 includes a map acquisition module 510, a second image acquisition module 520, and a relocation module 530.
  • The map acquisition module 510 is configured to acquire map data transmitted by the first device, where the map data is obtained by the first device constructing, according to the acquired image containing the first target object, a map corresponding to the first real scene where the first device is located, and the first target object is in the first real scene. The second image acquisition module 520 is configured to acquire an image containing a second target object, where the second target object is in the second real scene where the second device is located, the first target object and the second target object have at least two or more identical feature points, and the second real scene is different from the first real scene. The relocation module 530 is configured to perform relocation according to the map data and the image.
  • In some implementations, the relocation module 530 may include: a first pose determination unit, configured to determine, according to the image containing the second target object and the map data, first pose data of the second device in a first spatial coordinate system, the first spatial coordinate system being the spatial coordinate system corresponding to the first device; a second pose determination unit, configured to acquire second pose data of the second device in a second spatial coordinate system, the second spatial coordinate system being the spatial coordinate system corresponding to the second device; and a relationship determination unit, configured to obtain the coordinate system conversion relationship between the first spatial coordinate system and the second spatial coordinate system according to the first pose data and the second pose data.
  • In some implementations, the data processing apparatus 500 may further include: a pose data acquisition module, configured to acquire the display data and third pose data of the virtual object displayed by the first device; a pose data conversion module, configured to convert the third pose data into fourth pose data in the second spatial coordinate system according to the coordinate system conversion relationship; and a virtual object superposition module, configured to superimpose the virtual object in the second real scene according to the display data and the fourth pose data.
  • the data processing apparatus 500 may further include: an indication information generating module.
  • The instruction information generation module is configured to generate instruction information after the relocation is performed according to the map data and the image, and to transmit the instruction information to the first device, where the instruction information is used to indicate that the relocation of the second device is successful.
  • the coupling between the modules may be electrical, mechanical or other forms of coupling.
  • the functional modules in the various embodiments of the present application may be integrated into one processing module, or each module may exist alone physically, or two or more modules may be integrated into one module.
  • the above-mentioned integrated modules can be implemented in the form of hardware or software functional modules.
  • the embodiment of the present application also provides a data processing system.
  • the data processing system includes a first device and a second device.
  • the first device and the second device can perform data transmission.
  • The first device is configured to: acquire an image containing a first target object, the first target object being in a first real scene where the first device is located; construct, according to the image, a map corresponding to the first real scene to obtain map data; and transmit the map data to the second device, where the map data is used to instruct the second device to perform relocation according to the map data and a second target object, the first target object has a corresponding relationship with the second target object, and the second target object is in a second real scene where the second device is located.
  • The second device is configured to: acquire the map data transmitted by the first device, where the map data is obtained by the first device constructing, according to the acquired image containing the first target object, a map corresponding to the first real scene where the first device is located, and the first target object is in the first real scene; acquire an image containing a second target object, where the second target object is in the second real scene where the second device is located and the first target object has a corresponding relationship with the second target object; and perform relocation according to the map data and the image containing the second target object.
  • In summary, in the solution provided by this application, an image containing a first target object is acquired, the first target object being in the first real scene where the first device is located; a map corresponding to the first real scene is constructed according to the image to obtain map data; and the map data is then sent to the second device, where the map data is used to instruct the second device to perform relocation according to the map data and a second target object, the first target object and the second target object have at least two or more identical feature points, the second target object is in the second real scene where the second device is located, and the second real scene is different from the first real scene. Relocation can therefore be achieved from corresponding target objects in different real scenes, which facilitates multi-person AR solutions in augmented reality and improves the user experience.
  • Referring to FIG. 15, the electronic device 100 may be an electronic device capable of running applications, such as a smart phone, a tablet computer, a smart watch, or a head-mounted display device.
  • The electronic device 100 in this application may include one or more of the following components: a processor 110, a memory 120, and one or more application programs, where the one or more application programs may be stored in the memory 120 and configured to be executed by the one or more processors 110, and the one or more programs are configured to execute the methods described in the foregoing method embodiments.
  • the processor 110 may include one or more processing cores.
  • The processor 110 uses various interfaces and lines to connect the various parts of the entire electronic device 100, and executes the various functions of the electronic device 100 and processes data by running or executing the instructions, programs, code sets, or instruction sets stored in the memory 120 and by calling data stored in the memory 120. Optionally, the processor 110 may be implemented in at least one hardware form of digital signal processing (DSP), field-programmable gate array (FPGA), and programmable logic array (PLA).
  • the processor 110 may be integrated with one or a combination of a central processing unit (CPU), a graphics processing unit (GPU), a modem, and the like.
  • the CPU mainly processes the operating system, user interface, and application programs; the GPU is used for rendering and drawing of display content; the modem is used for processing wireless communication. It can be understood that the above-mentioned modem may not be integrated into the processor 110, but may be implemented by a communication chip alone.
  • The memory 120 may include random access memory (RAM) or read-only memory (ROM). The memory 120 may be used to store instructions, programs, code, code sets, or instruction sets. The memory 120 may include a program storage area and a data storage area, where the program storage area may store instructions for implementing an operating system, instructions for implementing at least one function (such as a touch function, a sound playback function, or an image playback function), instructions for implementing the foregoing method embodiments, and so on. The data storage area may also store data created by the electronic device 100 during use (such as a phone book, audio and video data, and chat records).
  • FIG. 16 shows a structural block diagram of a computer-readable storage medium provided by an embodiment of the present application.
  • the computer-readable medium 800 stores program code, and the program code can be invoked by a processor to execute the method described in the foregoing method embodiment.
  • the computer-readable storage medium 800 may be an electronic memory such as flash memory, EEPROM (Electrically Erasable Programmable Read Only Memory), EPROM, hard disk, or ROM.
  • the computer-readable storage medium 800 includes a non-transitory computer-readable storage medium.
  • the computer-readable storage medium 800 has storage space for the program code 810 for executing any method steps in the above-mentioned methods. These program codes can be read from or written into one or more computer program products.
  • the program code 810 may be compressed in a suitable form, for example.

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • Multimedia (AREA)
  • Human Computer Interaction (AREA)
  • Physics & Mathematics (AREA)
  • General Engineering & Computer Science (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • General Health & Medical Sciences (AREA)
  • Health & Medical Sciences (AREA)
  • Psychiatry (AREA)
  • Social Psychology (AREA)
  • Software Systems (AREA)
  • Computer Hardware Design (AREA)
  • Computer Graphics (AREA)
  • Processing Or Creating Images (AREA)

Abstract

本申请公开了一种数据处理方法、装置、电子设备及存储介质,该数据处理方法应用于第一设备,所述方法包括:获取包含第一目标物的图像,所述第一目标物处于所述第一设备所处的第一现实场景中;根据所述图像,构建与所述第一现实场景对应的地图,获得地图数据;将所述地图数据传输至所述第二设备,所述地图数据用于指示所述第二设备根据所述地图数据以及第二目标物进行重定位,所述第一目标物与所述第二目标物具有至少两个以上相同的特征点,所述第二目标物处于所述第二设备所处的第二现实场景中,第二现实场景与第一现实场景不同。本方法可以实现不同现实场景下的重定位,便于实现增强现实中的多人交互。

Description

数据处理方法、装置、电子设备及存储介质
相关申请的交叉引用
本申请要求于2019年11月27日提交的申请号为201911184981.9的中国申请的优先权,其在此出于所有目的通过引用将其全部内容并入本文。
技术领域
本申请涉及显示技术领域,更具体地,涉及一种数据处理方法、装置、电子设备及存储介质。
背景技术
随着科技的进步,增强现实(AR,Augmented Reality)等技术已逐渐成为国内外研究的热点。增强现实是通过计算机系统提供的信息增加用户对现实世界感知的技术,已经广泛应用到教育、游戏、医疗等各个领域中,随之,多人AR的方案也开始出现。在传统的多人AR的方案中,利用重定位将各自的虚拟物体显示在对方的虚拟场景中,但是通常需要设备在相同的场景中才能完成重定位,使得重定位的难度较高。
发明内容
鉴于上述问题,本申请提出了一种数据处理方法、装置、电子设备及存储介质。
第一方面,本申请实施例提供了一种数据处理方法,应用于第一设备,所述方法包括:获取包含第一目标物的图像,所述第一目标物处于所述第一设备所处的第一现实场景中;根据所述图像,构建与所述第一现实场景对应的地图,获得地图数据;将所述地图数据传输至第二设备,所述地图数据用于指示所述第二设备根据所述地图数据以及第二目标物进行重定位,所述第一目标物与所述第二目标物具有至少两个以上相同的特征点,所述第二目标物处于所述第二设备所处的第二现实场景中,所述第二现实场景与所述第一现实场景不同。。
第二方面,本申请实施例提供了一种数据处理方法,应用于第二设备,所述方法包括:获取所述第一设备传输的地图数据,所述地图数据为所述第一设备根据获取的包含第一目标物的图像,构建与所述第一设备所处的第一现实场景对应的地图获得,所述第一目标物处于所述第一现实场景;获取包含第二目标物的图像,所述第二目标物处于所述第二设备所处的第二现实场景中,所述第一目标物与所述第二目标物具有至少两个以上相同的特征点,所述第二现实场景与所述第一现实场景不同;根据所述地图数据以及所述包含第二目标物的图像进行重定位。
第三方面,本申请实施例提供了一种数据处理装置,应用于第一设备,所述装置包括:第一图像获取模块、地图构建模块以及地图传输模块,其中,所述第一图像获取模块用于获取包含第一目标物的图像,所述第一目标物处于所述第一设备所处的第一现实场景中;所述地图构建模块用于根据所述图像,构建与所述第一现实场景对应的地图,获得地图数据;所述地图传输模块用于将所述地图数据传输至所述第二设备,所述地图数据用于指示所述第二设备根据所述地图数据以及第二目标物进行重定位,所述第一目标物与所述第二目标物具有至少两个以上相同的特征点,所述第二目标物处于所述第二设备所处的第二现实场景中,所述第二现实场景与所述第一现实场景不同。
第四方面,本申请实施例提供了一种数据处理装置,应用于第二设备,所述装置包括:地图获取模块、第二图像获取模块以及重定位模块,其中,所述地图获取模块用于获取所述第一设备传输的地图数据,所述地图数据为所述第一设备根据获取的包含第一目标物的图像,构建与所述第一设备所处的第一现实场景对应的地图获得,所述第一目标物处于所述第一现实场景;所述第二图像获取模块用于获取包含第二目标物的图像,所述第二目标物处于所述第二设备所处的第二现实场景中,所述第一目标物与所述第二目标物具有至少两个以上相同的特征点,所述第二现实场景与所述第一现实场景不同;所述重定位模块用于根据所述地图数据以及所述图像进行重定位。
第五方面,本申请实施例提供了一种电子设备,包括:一个或多个处理器;存储器;一个或多个应用程序,其中所述一个或多个应用程序被存储在所述存储器中并被配置为由所述一个或多个处理器执行,所述一个或多个程序配置用于执行上述第一方面提供的数据处理方法、或者执行上述第二方面提供的数据处理方法。
第六方面,本申请实施例提供了一种计算机可读取存储介质,所述计算机可读取存储介质中存储有程序代码,所述程序代码可被处理器调用执行上述第一方面提供的数据处理方法、或者执行上述第二方面提供的数据处理方法。
附图说明
为了更清楚地说明本申请实施例中的技术方案,下面将对实施例描述中所需要使用的附图作简单地介绍,显而易见地,下面描述中的附图仅仅是本申请的一些实施例,对于本领域技术人员来讲,在不付出创造性劳动的前提下,还可以根据这些附图获得其他的附图。
图1示出了适用于本申请实施例的应用场景的一种示意图。
图2示出了适用于本申请实施例的应用场景的另一种示意图。
图3示出了根据本申请一个实施例的数据处理方法流程图。
图4示出了根据本申请另一个实施例的数据处理方法流程图。
图5示出了本申请另一个实施例提供的一种显示效果示意图。
图6示出了本申请另一个实施例提供的另一种显示效果示意图。
图7示出了根据本申请又一个实施例的数据处理方法流程图。
图8示出了根据本申请再一个实施例的数据处理方法流程图。
图9示出了本申请再一个实施例提供的一种显示效果示意图。
图10示出了本申请再一个实施例提供的另一种显示效果示意图。
图11示出了本申请再一个实施例提供的又一种显示效果示意图。
图12示出了根据本申请又另一个实施例的数据处理方法流程图。
图13示出了根据本申请一个实施例的数据处理装置的一种框图。
图14示出了根据本申请另一个实施例的数据处理装置的一种框图。
图15是本申请实施例的用于执行根据本申请实施例的数据处理方法的电子设备的框图。
图16是本申请实施例的用于保存或者携带实现根据本申请实施例的数据处理方法的程序代码的存储单元。
具体实施方式
为了使本技术领域的人员更好地理解本申请方案,下面将结合本申请实施例中的附图,对本申请实施例中的技术方案进行清楚、完整地描述。
增强现实(AR,Augmented Reality)是通过计算机系统提供的信息增加用户对现实世界感知的技术,其将计算机生成的虚拟对象、场景或系统提示信息等内容对象叠加到真实场景中,来增强或修改对现实世界环境或表示现实世界环境的数据的感知。
在传统的多人AR的技术方案中,一般分为主机和从机,主机和从机基于同步定位与地图构建(simultaneous localization and mapping,slam)技术添加和显示各自的虚拟物体,然后利用重定位将各自的虚拟物体显示在对方的虚拟场景中,并且可以各自操作虚拟物体进行互动,例如,两个游戏模型进行比赛。其中,重定位主要是使得主机与从机之间相互了解对方的位置,也就是需要共享一个共同的坐标系,坐标系可以是世界坐标系,也可以是主机的坐标系。
发明人经过长时间的研究发现,传统的多人AR的技术方案中,虽然能够通过重定位将各个设备间的虚拟物体同步到对方的场景中,但一般需要两台设备在同一地方且相似的角度进行重定位操作,才能完成重定位操作,这会使得用户无使用经验或没有指导的情况下很难完成重定位,造成用户体验不佳。
针对上述问题,发明人提出了本申请实施例提供的数据处理方法、装置、电子设备以及存储介质,可以根据不同现实场景中具有对应关系的目标物,实现重定位,方便增强现实中多人AR方案的实现,提升用户体验。其中,具体的数据处理方法在后续的实施例中进行详细的说明。
下面对本申请实施例提供的数据处理方法的应用场景进行介绍。
请参阅图1,示出了本申请实施例提供的数据处理方法的应用场景的一种示意图,该应用场景包括数据处理系统10,数据处理系统10可以用于多人AR的场景,该数据处理系统10可以包括多个电子设备,例如图1中示例性地示出了第一设备100以及第二设备200。
在一些实施方式中,电子设备可以是头戴显示装置,也可以是手机、平板电脑等移动设备。电子设备为头戴显示装置时,头戴显示装置可以为一体式头戴显示装置。电子设备也可以是与外接式/接入式头戴显示装置连接的手机等智能终端,即电子设备可作为头戴显示装置的处理和存储设备,插入或者接入外接式头戴显示装置,在头戴显示装置中对虚拟内容进行显示。电子设备也可以是单独的手机等移动终端,移动终端可以生成虚拟场景并于屏幕中进行显示。
在一些实施方式中,不同的电子设备可以处于不同的现实场景中,且电子设备之间可相互通信。每个电子设备所处的现实场景可以设置有目标物,目标物可以用于电子设备进行构建地图或者重定位,不同电子设备所处现实场景的目标物可以相同,也可以相互之间进行绑定。例如,图1中的第一设备100所处的现实场景可以设置有第一目标物,第一设备100可以扫描第一目标物并构建地图,在构建地图后,可以将地图数据发送至第二设备200,第二设备200可以根据其所处现实场景中的第二目标物进行重定位,后续第一设备100以及第二设备200则可以根据重定位的结果进行虚拟内容的同步。
请参阅图2,示出了本申请实施例提供的数据处理方法的应用场景的另一种示意图,该应用场景包括数据处理系统10,数据处理系统10可以用于多人AR的场景,该数据处理系统10可以包括多个电子设备以及服务器,多个电子设备与服务器可以通信。例如图2中示例性地示出了第一设备100、第二设备200以及服务器300。服务器300可以为传统服务器,也可以为云服务器等。
在一些实施方式中,电子设备之间可以通过服务器进行数据传输,也就是电子设备可以将数据传输至服务器,再由服务器将数据传输至其他电子设备。例如,第一设备100可以根据扫描第一目标物,构建地图,并将地图数据发送至服务器300,之后再由服务器300将地图数据传输至第二设备200,第二设备200可以据其所处现实场景中的第二目标物进行重定位,后续第一设备100以及第二设备200则可以根据重定位的结果进行虚拟内容的同步。
下面结合附图在实施例中对具体的数据处理方法进行介绍。
请参阅图3,图3示出了本申请一个实施例提供的数据处理方法的流程示意图。该数据处理方法应用于上述数据处理系统中的第一设备,数据处理系统还包括第二设备。下面将针对图3所示的流程进行详细的阐述,该数据处理方法具体可以包括以下步骤:
步骤S110:获取包含第一目标物的图像,所述第一目标物处于所述第一设备所处的第一现实场景中。
在本申请实施例中,第一设备所处的现实场景中可以设置有第一目标物,第一目标物可以为具有一定纹理特征的实体对象。第一目标物用于第一设备对第一目标物进行扫描,以构建地图。
在一些实施方式中,第一目标物上可以包括具有设定的纹理的图案,以便第一设备可以对图像中的图案进行识别,从而识别出图案中的特征点,进而根据特征点构建地图。第一目标物的具体形状和大小可以不作为限定,例如目标物的轮廓可以为矩形,大小可以为1平方米,当然,目标物的形状也可以是其他形状,大小也可以为其他大小。
在一些实施方式中,第一设备可以包括图像采集装置,图像采集装置用于采集现实场景的图像等。图像采集装置可以为红外相机,也可以是可见光相机,图像采集装置的具体类型在本申请实施例中并不作为限定。第一设备可以通过该图像采集装置,对第一目标物进行图像采集,以获得包含第一目标物的图像。
在另一些实施方式中,也可以设置图像采集设备与设备连接,从而设备可以通过外接的图像采集设备对第一目标物进行图像采集,获得包含第一目标物的图像。在该方式中,图像采集设备还可以与设备的位姿保持一致,以便设备可以根据外接的图像采集设备采集的图像,进行位姿的识别等。
步骤S120:根据所述图像,构建与所述第一现实场景对应的地图,获得地图数据。
在本申请实施例中,第一设备在获得包含第一目标物的图像之后,则可以根据该包含第一目标物的图像,构建地图。其中,第一设备还可以获取第一设备获取该图像时的位姿数据,并根据第一设备的位姿数据以及该包含第一目标物的图像,构建地图。
在一些实施方式中,第一设备可以识别第一目标物的图像中的特征点。其中,第一目标物的特征点的信息可以预先存储与第一设备,并且这些特征点与地图的内容相关联。第一设备可以根据预先存储的特征点的特征信息,对第一目标物的特征点进行识别,以识别出第一目标物的各个特征点。第一设备在识别获得第一目标物的各个特征点之后,则可以根据各个特征点,确定对应的地图内容。然后第一设备根据第一设备的位姿数据,确定各个地图内容于第一设备所对应的空间坐标系中的位置,根据各个地图内容及其对应的位置,构建出地图。构建出的地图可以作为与第一设备所处的第一现实场景所对应的地图。可以理解的,构建地图,也可以获得地图的内容数据、各个内容的位置、设备的位姿数据、第一目标物的各个特征点的信息等,这些数据可以被携带于地图中,当然,地图中具体的数据可以不作为限定。
步骤S130:将所述地图数据传输至所述第二设备,所述地图数据用于指示所述第二设备根据所述地图数据以及第二目标物进行重定位,所述第一目标物与所述第二目标物具有至少两个相同的特征点,所述第二目标物处于所述第二设备所处的第二现实场景中,所述第二现实场景与所述 第一现实场景不同。
在本申请实施例中,第一设备在根据包含第一目标物的图像,生成地图得到地图数据以后,第一设备可以将地图数据传输至第二设备。第二设备在获得地图数据之后,则可以根据地图数据以及第二设备所处的第二现实场景中的第二目标物,进行重定位,以获得第一设备与第二设备之间的位置关系,实现第一设备对应的空间坐标系与第二设备对应的空间坐标系的对齐,从而第一设备以及第二设备可以根据对齐后的空间坐标系,进行虚拟内容的显示和交互。
其中,第一目标物与第二目标物具有至少两个相同的特征点。例如,第一目标物可以与第二目标物相同,也就是所有特征点是相同的,第一设备可以通过第一目标物识别得到的特征点,并根据特征点构建地图,第二设备可以通过第二目标物识别的特征点,识别出地图,并且第一设备构建的地图内容与第二设备识别出的地图内容相同,从而第二设备后续可以根据第二目标物识别出的地图,以及第一设备构建的地图,进行重定位。又例如,第一目标物与第二目标物具有部分相同的特征点,且部分相同的特征点的数量为至少两个,第一目标物的其他不同的特征点对应的地图内容可以与第二目标物的其他不同的特征点对应的地图内容对应,因此,第一设备根据第一目标物构建出的地图内容与第二设备根据第二目标物识别出的地图内容之间,具有部分相同的地图内容,而其他不同的地图内容之间相互对应,第二设备根据相同的地图内容以及相互对应的地图内容,也可以实现重定位。第二设备进行重定位的操作在后续的实施例进行介绍。
本申请实施例提供的数据处理方法,通过获取包含第一目标物的图像,该第一目标物处于第一设备所处的第一现实场景中,根据该图像,构建与第一现实场景对应的地图,获得地图数据,然后将地图数据发送至第二设备,该地图数据用于指示第二设备根据地图数据以及第二目标物进行重定位,且第一目标物与第二目标物具有至少两个相同的特征点,第二目标物处于第二设备所处的第二现实场景中,并且第一现实场景与第二现实场景不同,从而可以根据不同现实场景中的目标物,实现重定位,进而便于了多人AR方案的实现,提升了用户体验。
请参阅图4,图4示出了本申请另一个实施例提供的数据处理方法的流程示意图。该数据处理方法应用于上述数据处理系统中的第一设备,数据处理系统还包括第二设备。下面将针对图4所示的流程进行详细的阐述,该数据处理方法具体可以包括以下步骤:
步骤S210:获取包含第一目标物的图像,所述第一目标物处于所述第一设备所处的第一现实场景中,所述第一目标物上包括预先根据所述第一现实场景生成的图案。
在本申请实施例中,第一目标物上可以包括预先生成的图案,该图案可以预先根据第一现实场景生成,也就是说,该图案中的特征点与第一现实场景对应。具体地,该图案可以是具有纹理特征的图案,图案中的多个特征点可以与第一现实场景中的实体内容相对应,从而后续第一设备根据包含第一目标物的图像,构建的地图可以与第一现实场景相对应。
在一些实施方式中,上述图案可以为贴图的形式设置于第一现实场景中,例如通过将预先生成的图案进行打印,并将打印出的图案贴设于第一现实场景。在另一些实施方式中,上述图案也可以为贴图的形式设置于一个实体对象(例如木板等),并将贴设有该贴图的实体对象放置于第一现实场景。当然,具体第一目标物的具体形式可以不作为限定。
在一些实施方式中,第一设备在获取包含第一目标物的图像时,可以获取仅包含第一目标物的图像,以提升构建地图成功的概率。可以理解的,由于仅预先建立有以上图案中的特征信息所对应的地图内容,如果识别到其他的特征点,则无法将地图内容与其他的特征点对应,可能导致构建的地图的过程出现中断,或者构建出的地图中存在非地图区域等。
在一些实施方式中,第一设备还可以获取多张包含第一目标物的图像,多张包含第一目标物的图像可以是在多个拍摄角度,对第一目标物进行图像采集而获得的图像。其中,多个拍摄角度可以是沿第一目标物的中心进行360度的旋转过程中的多个拍摄角度。
另外,在获取包含第一目标物的图像时,还可以对第一现实场景中的环境光的亮度进行检测,如果环境光的亮度低于预设亮度时,还可以进行补光等,以提升建图的效率和质量。
步骤S220:识别所述图像中的所述图案,获得所述图案中的特征信息,所述特征信息与所述第一现实场景对应。
在本申请实施例中,第一设备在获取到包含第一目标物的图像之后,第一设备可以识别图像中的图案,从而获得图案中的特征信息。贴图是预先根据第一现实场景生成的,第一设备识别图像中的图案,获得的图案的特征信息可以与第一现实场景对应。具体的,图案中的特征信息可以包括图案中的多个特征点,图案中的多个特征点可以与第一现实场景中的实体内容相对应,从而后续第一设备根据包含第一目标物的图像,构建的地图可以与第一现实场景相对应。
步骤S230:获取所述第一设备的位姿数据。
在本申请实施例中,由于要实现利用构建的地图,用于第二设备进行重定位,因此,构建的地图中应当携带第一设备的位姿数据。作为一种具体的实施方式,第一设备可以根据采集到的包含第一目标物的图像,以及惯性测量单元(Inertial measurement unit,IMU)采集的数据(运动数据),计算第一设备的位姿数据。其中,位姿数据用于表征第一设备在第一设备所对应的空间坐标系中的位置和姿态,例如位置可以以空间坐标来表示,姿态可以利用旋转角度来表示。当然,第一终端具体获取位姿数据的方式可以不作为限定,例如,第一设备也可以利用与位置和姿态有关的定位模块来实现获取位姿数据。
步骤S240:根据所述特征信息以及所述位姿数据,生成与所述第一现实场景对应的地图,获得地图数据。
在本申请实施例中,第一设备在识别获得第一目标物的图案的特征信息,以及在获得第一设备的位姿数据之后,则可以根据特征信息以及位姿数据,生成与第一现实场景对应的地图。
在一些实施方式中,第一设备可以根据特征信息,确定与第一现实场景对应的地图内容,然后再根据第一设备的位姿数据,确定各个地图内容于第一设备所对应的空间坐标系中的位置,根据各个地图内容及其对应的位置,构建出地图,并得到地图数据。地图数据中可以携带有上述图案中的特征信息、第一设备的位姿数据、地图的内容数据、各个地图内容的位置等,具体携带的数据可以不作为限定。第一设备所对应的空间坐标系可以为以第一设备为原点的世界坐标系,也可以为第一设备的相机坐标系等,在此不做限定。
在一些实施方式中,第一设备可以按照以上方式,建立多个地图,并从多个地图中选择满足指定条件的地图,作为需要传输至第二设备的地图。其中,指定条件可以为地图中的内容与第一现实场景的匹配度高于设定阈值,该阈值可以为90%,也可以为95%等,具体数值可以不作为限定。可以理解的,当地图中的内容与第一现实场景的匹配度越高,则表示构建的地图能与第一现实场景越匹配,构建的地图的质量也越好,通过以上方式可以筛选掉质量不佳的地图,以避免后续传输至第二设备的地图数据,导致重定位失败。
步骤S250:将所述地图数据传输至服务器,所述服务器用于将所述地图数据传输至所述第二设备,所述地图数据用于指示所述第二设备根据所述地图数据以及第二目标物进行重定位,所述第一目标物与所述第二目标物具有至少两个相同的特征点,所述第二目标物处于所述第二设备所处的第二现实场景中,所述第二现实场景与所述第一现实场景不同。
在本申请实施例中,步骤S250可以参阅前述实施例的内容,在此不再赘述。
步骤S260:当获取到指示信息时,控制所述第一设备在所述第一现实场景中叠加虚拟物体,其中,所述指示信息用于指示所述第二设备重定位成功。
在本申请实施例中,第一设备可以对接收到的信息进行监测,当监测到用于指示第二设备重定位成功的指示信息时,则表示第一设备以及第二设备可以显示和操作虚拟物体,并且能够对显示的虚拟物体进行同步。因此,第一设备可以生成并显示虚拟物体。
在一些实施方式中,当获取到用于指示第二设备重定位成功的指示信息时,第一设备还可以生成虚拟的提示内容,并将提示内容进行显示,该提示内容可以用于提示第一设备对应的用户,可以添加或者显示虚拟物体,从而便于用户知晓当前可用进行多人AR。
在一些实施方式中,第一设备可以对用户的操作进行检测,当监测到用于添加虚拟物体的添加操作时,第一设备根据添加操作,将虚拟物体叠加到第一现实场景中。其中,添加操作可以为用户于显示屏上的触控操作,具体可以通过设定的滑动手势、滑动轨迹等触发;添加操作也可以为根据拍摄的手势图像,进行手势识别,在识别出的手势为设定的手势后,确定检测到添加操作,具体的添加操作的形式可以不做限定。第一设备将虚拟物体叠加到第一现实场景中,可以是根据虚拟空间与现实空间的变换关系,将虚拟物体需要叠加到第一现实场景的位置,映射到虚拟空间中,并生成虚拟物体,从而实现在第一现实场景中叠加虚拟物体。
在另一些实施方式中,第一设备也可以在获取到用于指示第二设备重定位成功的指示信息时,将预先设定的虚拟物体叠加至第一现实场景中。例如,在游戏场景中,在第二设备重定位完成之后,第一设备则可以将预先设定的第一设备所对应的虚拟游戏角色叠加到第一现实场景中。在一种实施方式中,第一设备可以为移动终端,例如手机、平板电脑等,虚拟物体可以于移动终端的显示屏上进行显示。该实施方式下,第一设备可以获取第一现实场景的场景图像,例如,通过第一设备上的图像采集装置采集场景图像,第一设备还可以获取虚拟物体需要叠加于第一现实场景的叠加位置,然后第一设备可以根据该叠加位置,确定虚拟物体的像素坐标,最后根据像素坐标,将虚拟物体与场景图像进行合成,得到合成图像。
其中,第一设备根据叠加位置,确定虚拟物体的像素坐标,可以是,第一设备对现实空间中 的空间坐标系与虚拟空间中的空间坐标系进行对齐后,也就是获知两者的转换关系之后,根据叠加位置确定虚拟物体融合到场景图像中的像素坐标。第一设备在将虚拟物体与场景图像进行合成时,可以根据像素坐标,将虚拟物体融合到场景图像中,获得到合成图像中,虚拟物体与场景图像中的实体物体融合到一起,后续进行显示的图像,可以使用户观察到增强现实的显示效果。
在另一种实施方式中,第一设备可以为头戴显示装置,也可以为与外接式头戴显示装置连接的移动终端,也就是说虚拟物体是通过头戴显示装置进行显示。在该实施方式下,第一设备可以获取虚拟物体需要叠加到第一现实场景中的叠加位置,以及虚拟物体的内容数据,生成虚拟物体,实现虚拟物体叠加到第一现实场景。其中,第一设备可以根据该叠加位置,以及现实空间中的空间坐标系与虚拟空间中的空间坐标系的转换关系,将叠加位置转换为虚拟空间中的空间位置,也就获得了虚拟物体在虚拟空间中所需要显示的空间位置。再根据该空间位置以及虚拟物体的内容数据,对虚拟物体进行渲染,从而完成了虚拟物体的生成。
以上两种实施方式中,虚拟物体需要叠加于第一现实场景的叠加位置,可以根据检测到的添加操作进行确定。例如,第一设备为移动终端,虚拟物体需要于移动终端的显示屏上进行显示时,移动终端可以于显示屏上显示场景图像,以便用户根据场景图像确定叠加位置,然后根据用户于屏幕上的触控操作,可以确定虚拟物体的叠加位置。又例如,第一设备为头戴显示装置时,可以检测用户在第一现实场景中的手势动作,根据手势动作的位置,确定虚拟物体的叠加位置。当然,具体确定虚拟物体需要叠加于第一现实场景的叠加位置的方式可以不作为限定,例如,叠加位置也可以是预先设定。
步骤S270:显示所述虚拟物体。
在本申请实施例中,第一设备在将虚拟物体叠加到第一现实场景中之后,则可以将虚拟物体进行显示,以便用户察看到虚拟物体叠加到真实世界的效果,即实现增强现实的显示效果。
在一种实施方式中,当第一设备为移动终端,虚拟物体需要于移动终端的显示屏上进行显示时。第一设备则可以根据将虚拟物体与场景图像进行合成后获得的合成图像,显示于显示屏上,从而实现增强现实的显示效果。
在另一种实施方式中,当第一设备为头戴显示装置,虚拟物体通头戴显示装置进行显示时。在渲染虚拟物体之后,可以获取虚拟物体的画面显示数据,该画面显示数据可以包括显示画面中各个像素点的RGB值以及对应的像素点坐标等,第一设备可以根据画面显示数据生成虚拟画面,并将将生成的虚拟画面通过投射模组投射到显示镜片上,进行显示出虚拟物体,用户可以通过显示镜片,看到虚拟画面叠加显示于第一现实场景中的相应位置处,实现增强现实的显示效果,例如,请参阅图5,在上述举例的打斗类游戏的场景中,在添加虚拟游戏人物A后,用户可以通过显示镜片观察到处于第一现实场景中的虚拟游戏人物A,还可以观察到第一现实场景中处于当前视野范围内的第一目标物11,实现增强现实的显示效果。
步骤S280:获取所述虚拟物体的位姿数据以及显示数据。
在本申请实施例中,第一设备在生成并显示虚拟物体之后,需要第二设备也同步显示该虚拟物体。因此,第一设备还可以根据生成的虚拟物体,确定虚拟物体的位姿数据以及显示数据,其中,位姿数据可以为虚拟物体于虚拟空间中的位置和姿态,显示数据可以为用于渲染该虚拟物体的内容数据,例如顶点坐标、颜色等。
步骤S290:将所述显示数据以及所述位姿数据发送至所述第二设备,所述显示数据以及所述位姿数据用于所述第二设备同步显示所述虚拟物体。
在本申请实施例中,第一设备在获取到生成的虚拟物体的显示数据以及位姿数据之后,则可以将显示数据以及位姿数据发送至第二设备。第二设备可以根据第一终端发送的位姿数据,确定出该虚拟物体在其对应的空间坐标系中的位置,并根据显示数据对该虚拟物体进行渲染后,将该虚拟物体进行显示,实现第一设备生成和显示的虚拟物体同步到第二设备进行显示。
在一些实施方式中,第一设备也可以接收第二设备传输的目标虚拟物体的显示数据及位姿数据,该目标虚拟物体为第二设备在重定位后,生成和显示的虚拟物体,目标物体的位姿数据可以为第二设备经过对齐后的空间坐标系,换算到第一设备对应的空间坐标系中获得,从而第一设备可以直接根据目标虚拟物的位姿数据,确定目标虚拟物的显示位置,并生成目标虚拟物,将目标虚拟物进行显示,使第一设备生成的虚拟物以及第二设备生成的虚拟物同步进行显示。例如,请同时参阅图5及图6,在上述举例的打斗类游戏的场景中,第一设备在显示虚拟游戏人物A之后,接收到第二设备生成和显示的虚拟游戏人物B的显示数据和位姿数据,生成虚拟游戏人物B,并将虚拟游戏人物B进行显示,实现第二设备添加的虚拟游戏人物B同步到第一设备进行显示,并且可以控制虚拟游戏人物A与虚拟游戏人物B进行打斗等动作交互,实现打斗类游戏的交互。
在一些实施方式中,第一设备还可以检测对添加的虚拟物体的操作数据,并对操作数据进行响应,并将响应结果同步到第二设备。例如,在移动添加的虚拟物体后,可以将新的位姿数据同步到第二设备。
本申请实施例提供的数据处理方法,通过获取包含第一目标物的图像,该第一目标物处于第一设备所处的第一现实场景中,且第一目标物上包括预先根据第一现实场景生成的图案,通过识别该图案中的特征信息,并获取第一设备的位姿数据,根据特征信息和位姿数据构建地图,获得地图数据,然后将地图数据发送至第二设备,该地图数据用于指示第二设备根据地图数据以及第二目标物进行重定位,且第一目标物与第二目标物具有至少两个相同的特征点,第二目标物处于第二设备所处的第二现实场景中,第一现实场景与第二现实场景不同。从而可以根据不同现实场景中具有对应关系的目标物,实现重定位,进而便于了多人AR方案的实现,提升了用户体验。另外,当获取到用于指示第二设备重定位成功的指示信息时,生成虚拟物体并进行显示,然后将虚拟物体的位姿数据以及显示数据传输至第二设备,以便第二设备对该虚拟物体进行同步显示,实现了多人AR中的同步显示。
请参阅图7,图7示出了本申请又一个实施例提供的数据处理方法的流程示意图。该数据处理方法应用于上述数据处理系统中的第二设备,数据处理系统还包括第一设备。下面将针对图7所示的流程进行详细的阐述,所述数据处理方法具体可以包括以下步骤:
步骤S310:获取所述第一设备传输的地图数据,所述地图数据为所述第一设备根据获取的包含第一目标物的图像,构建与所述第一设备所处的第一现实场景对应的地图获得,所述第一目标物处于所述第一现实场景。
在一些实施方式中,第一设备与第二设备之间通过服务器进行数据传输时,第一设备传输的地图数据可以传输至服务器,再由第二设备接收服务器传输的地图数据。因此,第一设备与第二设备相距较远时,可以实现多人AR中的远程交互方案。
步骤S320:获取包含第二目标物的图像,所述第二目标物处于所述第二设备所处的第二现实场景中,所述第一目标物与所述第二目标物具有至少两个相同的特征点,所述第二现实场景与所述第一现实场景不同。
在本申请实施例中,第二设备在接收到地图数据之后,在进行重定位时,可以获取包含其所处的第二现实场景中的第二目标物的图像,以便后续根据地图数据和包含第二目标物的图像,进行重定位。
在一些实施方式中,第一目标物与第二目标物具有至少两个相同的特征点。例如,第一目标物可以与第二目标物相同,也就是所有特征点是相同的,第一设备可以通过第一目标物识别得到的特征点,并根据特征点构建地图,第二设备可以通过第二目标物识别的特征点,识别出地图,并且第一设备构建的地图内容与第二设备识别出的地图内容相同,从而第二设备后续可以根据第二目标物识别出的地图,以及第一设备构建的地图,进行重定位。又例如,第一目标物与第二目标物具有部分相同的特征点,且部分相同的特征点的数量为至少两个,第一目标物的其他不同的特征点对应的地图内容可以与第二目标物的其他不同的特征点对应的地图内容对应,因此,第一设备根据第一目标物构建出的地图内容与第二设备根据第二目标物识别出的地图内容之间,具有部分相同的地图内容,而其他不同的地图内容之间相互对应,第二设备根据相同的地图内容以及相互对应的地图内容,也可以实现重定位。
作为一种可选的实施方式,第一目标物以及第二目标物可以均是贴图,并且贴图上的图案为预先根据第一设备所处的第一现实场景生成,以方便利用地图数据进行重定位。
在一些实施方式中,第二设备在获取包含第二目标物的图像时,也可以获取仅包含第二目标物的图像,以提升重定位的成功概率。另外,在获取包含第二目标物的图像时,第二设备也可以对第二现实场景中的环境光的亮度进行检测,如果环境光的亮度低于预设亮度时,还可以利用补光模块(例如补光灯等)进行补光等,以提升建图的效率和质量。
步骤S330:根据所述地图数据以及所述包含第二目标物的图像进行重定位。
在本申请实施例中,第二设备在获取到地图数据,以及包含第二目标物的图像之后,则可以根据地图数据以及包含第二目标物的图像进行重定位。
在一些实施方式中,第二设备可以识别第二目标物中的特征信息,例如识别贴图的图案中的特征信息。第二设备还可以根据地图数据,确定第一目标物中的特征信息,并将根据第一目标物的特征信息,以及第二目标物的特征信息,进行第一目标物与第二目标物的匹配,如果两者的相似度大于设定相似度时,再根据地图数据以及包含第二目标物的图像,确定第一设备与第二设备之间的位置关系,并将第一设备对应的空间坐标系与第二设备对应的空间坐标系的对齐,从而完 成重定位,后续第一设备以及第二设备可以根据对齐后的空间坐标系,进行虚拟内容的显示和交互。
在一种具体的实施方式中,第二设备可以根据包含第二目标物的图像,识别出特征信息,并确定与特征信息对应的地图内容,然后获取第二设备的位姿数据,再根据第二设备的位姿数据,确定各个地图内容于第二设备所对应的空间坐标系中的位置,根据各个地图内容及其对应的位置,确定出当前识别到的地图。由于第一目标物以及第二目标物为相同,或者其对应的地图内容是对应的,因此第二设备可以根据第一设备构建的地图,以及第二目标物识别到的地图,将两者进行匹配,可以分析出第一设备与第二设备之间的位置关系,进而根据该位置关系,可以确定第一设备对应的空间坐标系与第二设备对应的空间坐标系之间的转换关系,从而实现了坐标系的对齐,也就完成了重定位。在重定位完成后,第一设备以及第二设备则可以利用根据对齐后的空间坐标系,实现虚拟内容的同步。第一设备对应的空间坐标系可以为以第一设备为原点的世界坐标系,也可以为第一设备的相机坐标系,在此不做限定。同样的,第二设备对应的空间坐标系可以为以第二设备为原点的世界坐标系,也可以为第二设备的相机坐标系,在此不做限定。
本申请实施例提供的数据处理方法,第二设备通过获取传输的地图数据,且该地图数据为第一设备根据获取的包含第一目标物的图像,构建与第一设备所处的第一现实场景对应的地图获得,第一目标物处于第一现实场景,然后获取包含第二目标物的图像,第二目标物处于第二设备所处的第二现实场景中,第一目标物与第二目标物具有对应关系,再根据地图数据以及包含第二目标物的图像进行重定位,从而可以根据不同现实场景中的目标物,实现重定位,进而便于了多人AR方案的实现,提升了用户体验。
请参阅图8,图8示出了本申请又一个实施例提供的数据处理方法的流程示意图。该数据处理方法应用于上述数据处理系统中的第二设备,数据处理系统还包括第一设备。下面将针对图8所示的流程进行详细的阐述,所述数据处理方法具体可以包括以下步骤:
步骤S410:获取所述第一设备传输的地图数据,所述地图数据为所述第一设备根据获取的包含第一目标物的图像,构建与所述第一设备所处的第一现实场景对应的地图获得,所述第一目标物处于所述第一现实场景。
步骤S420:获取包含第二目标物的图像,所述第二目标物处于所述第二设备所处的第二现实场景中,所述第一目标物与所述第二目标物具有至少两个以上相同的特征点,所述第二现实场景与所述第一现实场景不同。
在本申请实施例中,步骤S410以及步骤S420可以参阅前述实施例的内容,在此不再赘述。
步骤S430:根据所述包含第二目标物的图像以及所述地图数据,确定所述第二终端于第一空间坐标系中的第一位姿数据,所述第一空间坐标系为所述第一设备对应的空间坐标系。
在一些实施方式中,第二设备可以根据包含第二目标物的图像,确定图像中的特征信息对应的各个地图内容,以及第二设备的位姿数据,确定各个地图内容于第二设备所对应的空间坐标系中的位置,根据各个地图内容及其对应的位置,确定出当前识别到的地图。第二设备可以根据识别到的地图,以及接收到的地图进行特征匹配,分别提取两者相同的图像特征,输出点匹配集合,进而估计第二设备相对于第一设备对应的第一空间坐标系的位姿数据,其中位姿估计算法可以是PNP算法,具体的位姿估计算法可以不做限定。第一设备对应的空间坐标系可以为以第一设备为原点的世界坐标系,也可以为第一设备的相机坐标系,在此不做限定。
在一种实施方式中,当第一目标物的特征点与第二目标物的特征点全部相同时,则可以直接将识别出的地图与接收到的地图进行特征匹配,从而后续根据匹配结果确定第二设备在第一空间坐标系的位姿数据。
在另一种实施方式中,当第一目标物的特征点与第二目标物的特征点部分相同时,而第一目标物与第二目标物之间的不同特征点对应的地图内容是相对应的。则可以将识别出的地图内容中,部分地图内容与接收的地图内容进行匹配,而其余地图内容则与第一目标物的不同特征点所对应的地图内容进行匹配,从而后续根据匹配结果确定第二设备在第一空间坐标系的位姿数据。
步骤S440:获取所述第二终端于第二空间坐标系中的第二位姿数据,所述第二空间坐标系为所述第二设备对应的空间坐标系。
其中,第二终端可以根据获取的包含第二目标物的图像,以及IMU采集到的数据,确定第二终端于第二空间坐标系中的第二位姿数据。第二设备对应的空间坐标系可以为以第二设备为原点的世界坐标系,也可以为第二设备的相机坐标系,在此不做限定。
步骤S450:根据所述第一位姿数据以及所述第二位姿数据,获取所述第一空间坐标系与所述第二空间坐标系之间的坐标系转换关系。
在一些实施方式中,第二设备可以根据第一位姿数据以及第二位姿数据,获得第一空间坐标系与第二空间坐标系之间的坐标系变换数据,例如获得坐标系变换矩阵等,并将坐标系变换数据作为第一空间坐标系与第二空间坐标系之间的坐标系转换关系。后续则可以利用该坐标系变换数据,实现第一设备生成的虚拟物体的位姿数据的转换,以同步显示第一设备生成和显示的虚拟物体。
在本申请实施例中,第二设备在完成重定位之后,还可以生成指示信息,并将该指示信息传输至第一设备,指示信息用于指示第二设备重定位成功,以便第一设备的用户可以获知可以添加虚拟物体,或者第一设备可以生成预先设定的虚拟物体。
步骤S460:获取所述第一设备显示的虚拟物体的显示数据以及第三位姿数据。
可以理解的,第一设备在发送其生成和显示的虚拟物体的显示数据以及第三位姿数据之后,第二设备可以对应接收到该显示数据和第三位姿数据。
步骤S470:根据所述坐标系转换关系,将所述第三位姿数据转换为所述第二空间坐标系中的第四位姿数据。
在一些实施方式中,第二设备可以根据以上获取的坐标系转换关系,将第三位姿数据转换为第二空间坐标系中的第四位姿数据,也就实现了虚拟物体的位姿数据从第一设备对应的第一空间坐标系,到第二设备对应的第二空间坐标系的转换。
步骤S480:根据所述显示数据以及所述第四位姿数据,在所述第二现实场景中叠加所述虚拟物体。
在一些实施方式中,第二设备则可以根据转换后获得的第四位姿数据,确定虚拟物体在第二空间坐标系中的位置,并根据该位置和显示数据生成该虚拟物体,以将虚拟物体叠加至第二现实场景中。具体将虚拟物体叠加至第二现实场景的方式,可以参阅前述实施例中第一设备将虚拟物体叠加至第一现实场景的方式,在此不再赘述。
在一些实施方式中,第二设备在重定位后,也可以根据用户的添加操作或者按照预设的规则,生成和显示目标虚拟物体。例如,请参阅图9,在添加虚拟人物B后,第二设备对应的用户可以通过显示镜片观察到处于第一现实场景中的虚拟人物B,还可以观察到第二现实场景中处于当前视野范围内的第二目标物11,实现增强现实的显示效果。另外,如果第二设备在生成和显示目标虚拟物体之后,如果此时已经接收到第一设备发生的虚拟物体和显示数据,则可以同步显示第一设备生成和显示的虚拟物体。
例如,请同时参阅图9及图10,第二设备在显示虚拟游戏人物B之后,接收到第一设备生成和显示的虚拟游戏人物A的显示数据和位姿数据,生成虚拟游戏人物A,并将虚拟游戏人物A进行显示,实现第一设备添加的虚拟游戏人物A同步到第二设备进行显示,并且可以控制虚拟游戏人物B与虚拟游戏人物A进行打斗等动作交互,实现打斗类游戏的交互。
在以上打斗类游戏的场景中,可以看出虚拟物体(例如以上虚拟人物等)可以叠加至现实场景中第一目标物及第二目标物所在区域以外的位置处,使用户察看到虚拟物体叠加至第一目标物及第二目标物所在区域以外的显示效果。
在一些场景中,虚拟物体也可以叠加于第一目标物以及第二目标物所在的位置区域。当通过移动终端的显示屏显示虚拟物体时,则显示屏上显示的内容可以为虚拟物体叠加到第一目标物的图像,或者虚拟物体叠加到第二目标物的图像。
例如,在棋类游戏的场景中,移动终端的显示屏上可以仅显示叠加到第一目标物或者第二目标物上的棋盘、棋子等内容,以较好地实现游戏显示效果。具体的,第一设备与第二设备均为移动终端,在第一设备根据第一目标物构建地图,第二设备根据第二目标物以及构建的地图进行重定位后,第一设备可以显示叠加于第一目标物的棋盘,第二设备也可以同步显示叠加于第二目标物的棋盘,当第一设备添加有“黑色棋子”到棋盘后,第一设备可以将添加的“黑色棋子”的显示数据和位姿数据发送至第二设备,第二设备根据显示数据和位姿数据,在棋盘上相同的位置处显示该“黑色棋子”。同理,当第二设备添加有“白色棋子”到棋盘后,第二设备可以将添加的“白色棋子”的显示数据和位姿数据发送至第一设备,第一设备根据“白色棋子”的显示数据和位姿数据,在棋盘上相同的位置处显示该“白色棋子”。另外,当第一设备显示有游戏提示内容时,第二设备也可以同步显示游戏提示内容。
同样的,第二设备可以将目标虚拟物体的显示数据和位姿数据发送至第一设备,从而实现第二设备生成和显示的虚拟物体同步到第一设备进行显示。需要说明的是,由于第二设备显示的目标虚拟物体是根据第二空间坐标系中的位置生成,因此可以根据坐标系转换关系,将目标虚拟物体的位姿数据转换为第一空间坐标系中的位姿数据后,发送至第一设备,以便第一设备可以根据 第二设备发送的位姿数据生成目标虚拟物体。
在一些实施方式中,第二设备还可以检测对添加的虚拟物体的操作数据,并对操作数据进行响应,并将响应结果同步到第一设备。例如,在移动添加的虚拟物体后,可以将新的位姿数据同步到第一设备。又例如,请参阅图11,在上述举例的打斗类游戏的场景中,可以将虚拟人物进行切换,以丰富打斗类型游戏的玩法。
本申请实施例提供的数据处理方法,通过获取传输的地图数据,且该地图数据为第一设备根据获取的包含第一目标物的图像,构建与第一设备所处的第一现实场景对应的地图获得,第一目标物处于第一现实场景,然后获取包含第二目标物的图像,第二目标物处于第二设备所处的第二现实场景中,第一目标物与第二目标物具有对应关系,然后根据包含第二目标物的图像以及地图数据,确定第二设备在第一设备对应的第一空间坐标系中的第一位姿数据,再获取第二设备在其对应的第二空间坐标系中的第二位姿数据,再根据第一位姿数据以及第二位姿数据,确定第一空间坐标系与第二空间坐标系之间的坐标系转换关系,从而实现重定位。在获得到坐标系转换关系之后,则可以根据坐标系转换关系,进行虚拟物体的同步,进而实现多人AR中的同步显示和交互。
请参阅图12,图12示出了本申请又另一个实施例提供的数据处理方法的流程示意图。该数据处理方法应用于数据处理系统,数据处理系统包括第一设备、第二设备以及服务器,第一设备以及第二设备均与该服务器通信连接。下面将针对图12所示的流程进行详细的阐述,所述数据处理方法具体可以包括以下步骤:
步骤S501:构建地图。
在本申请实施例中,第一设备构建地图的方式,可以参阅前述实施例的内容,在此不再赘述。
步骤S502:上传地图至服务器。
在本申请实施例中,第一设备在构建地图之后,则可以上传地图至服务器。
步骤S503:传输地图数据至第二设备。
在本申请实施例中,服务器在接收到第一设备上传的地图之后,可以将地图数据传输至第二设备。
步骤S504:进行重定位。
在本申请实施例中,第二设备可以根据地图数据以及包含第二目标物的图像,进行重定位,具体重定位方式可以参阅前述实施例的内容,在此不再赘述。
步骤S505:重定位成功。
步骤S506:发送指示信息至服务器。
在本申请实施例中,该指示信息用于指示第二设备重定位成功。
步骤S507:传输指示信息至第一设备。
在本申请实施例中,服务器接收到指示信息之后,则可以将该指示信息传输至第一设备,以便第一设备获知第二设备重定位成功。
步骤S508:接收到指示信息。
相应的,第一设备可以接收到该指示信息。
步骤S509:添加虚拟物体。
在本申请实施例中,在第二设备重定位成功,第一设备获知第二设备重定位成功后,则可以第一设备以及第二设备均可以各自添加虚拟物体至虚拟场景中。
步骤S510:传输添加的虚拟物体。
在本申请实施例中,第一设备在添加虚拟物体后,可以将添加的虚拟物体的显示数据以及位姿数据传输至服务器,服务器可以将第一设备添加的虚拟物体的显示数据以及位姿数据转发至第二设备,以便第二设备同步显示第一设备添加的虚拟物体。同理,第二设备也在添加虚拟物体后,可以将添加的虚拟物体的显示数据以及位姿数据传输至服务器,服务器可以将第二设备添加的虚拟物体的显示数据以及位姿数据转发至第一设备,以便第一设备同步显示第二设备添加的虚拟物体。从而在互相同步显示添加的虚拟物体之后,则可以实现虚拟物体的同步显示,并且后续可以根据显示的虚拟物体进行交互。
步骤S511:操作虚拟物体。
在本申请实施例中,第一设备可以根据用户的操作,对第一设备对应的虚拟物体进行操作。同样的,第二设备可以根据用户的操作,对第二设备对应的虚拟物体进行操作。
步骤S512:传输虚拟物体的当前位姿数据。
在本申请实施例中,在对虚拟物体进行操作后,可能导致虚拟物体的位姿的变化,因此可以传输虚拟物体的当前位姿数据,以便同步虚拟物体的位姿变化。
请参阅图13,其示出了本申请一个实施例提供的一种数据处理装置400的结构框图。该数据处理装置400应用上述的数据处理系统中的第一设备,该数据处理系统还包括第二设备。该数据处理装置400包括:第一图像获取模块410、地图构建模块420以及地图传输模块430。其中,所述第一图像获取模块410用于获取包含第一目标物的图像,所述第一目标物处于所述第一设备所处的第一现实场景中;所述地图构建模块420用于根据所述图像,构建与所述第一现实场景对应的地图,获得地图数据;所述地图传输模块430用于将所述地图数据传输至所述第二设备,所述地图数据用于指示所述第二设备根据所述地图数据以及第二目标物进行重定位,所述第一目标物与所述第二目标物具有至少两个以上相同的特征点,所述第二目标物处于所述第二设备所处的第二现实场景中,所述第二现实场景与所述第一现实场景不同。
在一些实施方式中,该数据处理装置400还可以包括:内容叠加模块,用于在所述将所述地图发送至所述第二设备之后,当接收到所述第二设备发送的指示信息时,控制所述第一设备在所述第一现实场景中叠加虚拟物体,其中,所述指示信息用于指示所述第二设备重定位成功;内容显示模块,用于显示所述虚拟物体。
在该实施方式下,内容叠加模块可以具体用于:获取所述第一现实场景的场景图像;获取虚拟物体需要叠加于所述第一现实场景的叠加位置;根据所述叠加位置,确定所述虚拟物体的像素坐标;根据所述像素坐标,将所述虚拟物体与所述场景图像合成,得到合成图像。内容显示模块可以具体用于:将所述合成图像进行显示。
在该实施方式下,内容叠加模块可以具体用于:获取虚拟物体需要叠加于所述第一现实场景的叠加位置;根据所述叠加位置以及所述虚拟物体的内容数据,生成所述虚拟物体。
进一步的,内容生成模块获取虚拟物体需要叠加于所述第一现实场景的叠加位置可以包括:根据检测到的添加操作,虚拟物体需要叠加于所述第一现实场景的叠加位置。
在一些实施方式中,该数据处理装置400还可以包括:数据获取模块,用于在所述生成并显示虚拟物体之后,获取所述虚拟物体的位姿数据以及显示数据;数据发送模块,用于将所述显示数据以及所述位姿数据发送至所述第二设备,所述显示数据以及所述位姿数据用于所述第二设备同步显示所述虚拟物体。
在一些实施方式中,地图构建模块420可以包括:图案识别单元,用于识别所述图像中的所述图案,获得所述图案中的特征信息,所述特征信息与所述第一现实场景对应;数据获取单元,用于获取所述第一设备的位姿数据;地图生成单元,用于根据所述特征信息以及所述位姿数据,生成与所述第一现实场景对应的地图,获得地图数据。
在一些实施方式中,地图传输模块可以具体用于:将所述地图数据传输至服务器,所述服务器用于将所述地图数据传输至所述第二设备。
请参阅图14,其示出了本申请另一个实施例提供的一种数据处理装置500的结构框图。该数据处理装置500应用上述的数据处理系统中的第二设备,该数据处理系统还包括第一设备。该数据处理装置500包括:地图获取模块510、第二图像获取模块520以及重定位模块530。其中,所述地图获取模块510用于获取所述第一设备传输的地图数据,所述地图数据为所述第一设备根据获取的包含第一目标物的图像,构建与所述第一设备所处的第一现实场景对应的地图获得,所述第一目标物处于所述第一现实场景;所述第二图像获取模块520用于获取包含第二目标物的图像,所述第二目标物处于所述第二设备所处的第二现实场景中,所述第一目标物与所述第二目标物具有至少两个以上相同的特征点,所述第二现实场景与所述第一现实场景不同;所述重定位模块530用于根据所述地图数据以及所述图像进行重定位。
在一些实施方式中,重定位模块530可以包括:第一位姿确定单元,用于根据所述包含第二目标物的图像以及所述地图数据,确定所述第二终端于第一空间坐标系中的第一位姿数据,所述第一空间坐标系为所述第一设备对应的空间坐标系;第二位姿确定单元,用于获取所述第二终端于第二空间坐标系中的第二位姿数据,所述第二空间坐标系为所述第二设备对应的空间坐标系;关系确定单元,用于根据所述第一位姿数据以及所述第二位姿数据,获取所述第一空间坐标系与所述第二空间坐标系之间的坐标系转换关系。
在一些实施方式中,该数据处理装置500还可以包括:位姿数据获取模块,用于获取所述第一设备显示的虚拟物体的显示数据以及第三位姿数据;位姿数据转换模块,用于根据所述坐标系转换关系,将所述第三位姿数据转换为所述第二空间坐标系中的第四位姿数据;虚拟物体叠加模块,用于根据所述显示数据以及所述第四位姿数据,在所述第二现实场景中叠加所述虚拟物体。
在一些实施方式中,该数据处理装置500还可以包括:指示信息生成模块。指示信息生成模块用于在所述根据所述地图数据以及所述图像进行重定位之后,成指示信息,并将所述指示信息 传输至所述第一设备,所述指示信息用于指示所述第二设备重定位成功。
所属领域的技术人员可以清楚地了解到,为描述的方便和简洁,上述描述装置和模块的具体工作过程,可以参考前述方法实施例中的对应过程,在此不再赘述。
在本申请所提供的几个实施例中,模块相互之间的耦合可以是电性,机械或其它形式的耦合。
另外,在本申请各个实施例中的各功能模块可以集成在一个处理模块中,也可以是各个模块单独物理存在,也可以两个或两个以上模块集成在一个模块中。上述集成的模块既可以采用硬件的形式实现,也可以采用软件功能模块的形式实现。
本申请实施例还提供了一种数据处理系统,该数据处理系统包括第一设备以及第二设备。第一设备与第二设备可进行数据传输。其中,第一设备用于:获取包含第一目标物的图像,所述第一目标物处于所述第一设备所处的第一现实场景中;根据所述图像,构建与所述第一现实场景对应的地图,获得地图数据;将所述地图数据传输至所述第二设备,所述地图数据用于指示所述第二设备根据所述地图数据以及第二目标物进行重定位,所述第一目标物与所述第二目标物具有对应关系,所述第二目标物处于所述第二设备所处的第二现实场景中。第二设备用于:获取所述第一设备传输的地图数据,所述地图数据为所述第一设备根据获取的包含第一目标物的图像,构建与所述第一设备所处的第一现实场景对应的地图获得,所述第一目标物处于所述第一现实场景;获取包含第二目标物的图像,所述第二目标物处于所述第二设备所处的第二现实场景中,所述第一目标物与所述第二目标物具有对应关系;根据所述地图数据以及所述包含第二目标物的图像进行重定位。
综上所述,本申请提供的方案,通过获取包含第一目标物的图像,该第一目标物处于第一设备所处的第一现实场景中,根据该图像,构建与第一现实场景对应的地图,获得地图数据,然后将地图数据发送至第二设备,该地图数据用于指示第二设备根据地图数据以及第二目标物进行重定位,且第一目标物与第二目标物具有至少两个以上相同的特征点,第二目标物处于第二设备所处的第二现实场景中,且第二显示场景与第一现实场景不同,从而可以根据不同现实场景中具有对应关系的目标物,实现重定位,方便增强现实中多人AR方案的实现,提升用户体验。
请参考图15,其示出了本申请实施例提供的一种电子设备的结构框图。该电子设备100可以是智能手机、平板电脑、智能手表、头戴显示装置等能够运行应用程序的电子设备。本申请中的电子设备100可以包括一个或多个如下部件:处理器110、存储器120、以及一个或多个应用程序,其中一个或多个应用程序可以被存储在存储器120中并被配置为由一个或多个处理器110执行,一个或多个程序配置用于执行如前述方法实施例所描述的方法。
处理器110可以包括一个或者多个处理核。处理器110利用各种接口和线路连接整个电子设备100内的各个部分,通过运行或执行存储在存储器120内的指令、程序、代码集或指令集,以及调用存储在存储器120内的数据,执行电子设备100的各种功能和处理数据。可选地,处理器110可以采用数字信号处理(Digital Signal Processing,DSP)、现场可编程门阵列(Field-Programmable Gate Array,FPGA)、可编程逻辑阵列(Programmable Logic Array,PLA)中的至少一种硬件形式来实现。处理器110可集成中央处理器(Central Processing Unit,CPU)、图像处理器(Graphics Processing Unit,GPU)和调制解调器等中的一种或几种的组合。其中,CPU主要处理操作系统、用户界面和应用程序等;GPU用于负责显示内容的渲染和绘制;调制解调器用于处理无线通信。可以理解的是,上述调制解调器也可以不集成到处理器110中,单独通过一块通信芯片进行实现。
存储器120可以包括随机存储器(Random Access Memory,RAM),也可以包括只读存储器(Read-Only Memory)。存储器120可用于存储指令、程序、代码、代码集或指令集。存储器120可包括存储程序区和存储数据区,其中,存储程序区可存储用于实现操作系统的指令、用于实现至少一个功能的指令(比如触控功能、声音播放功能、图像播放功能等)、用于实现下述各个方法实施例的指令等。存储数据区还可以存储电子设备100在使用中所创建的数据(比如电话本、音视频数据、聊天记录数据)等。
请参考图16,其示出了本申请实施例提供的一种计算机可读存储介质的结构框图。该计算机可读介质800中存储有程序代码,所述程序代码可被处理器调用执行上述方法实施例中所描述的方法。
计算机可读存储介质800可以是诸如闪存、EEPROM(电可擦除可编程只读存储器)、EPROM、硬盘或者ROM之类的电子存储器。可选地,计算机可读存储介质800包括非易失性计算机可读介质(non-transitory computer-readable storage medium)。计算机可读存储介质800具有执行上述方法中的任何方法步骤的程序代码810的存储空间。这些程序代码可以从一个或者多个计算机程序 产品中读出或者写入到这一个或者多个计算机程序产品中。程序代码810可以例如以适当形式进行压缩。
最后应说明的是:以上实施例仅用以说明本申请的技术方案,而非对其限制;尽管参照前述实施例对本申请进行了详细的说明,本领域的普通技术人员当理解:其依然可以对前述各实施例所记载的技术方案进行修改,或者对其中部分技术特征进行等同替换;而这些修改或者替换,并不驱使相应技术方案的本质脱离本申请各实施例技术方案的精神和范围。

Claims (20)

  1. 一种数据处理方法,其特征在于,应用于第一设备,所述方法包括:
    获取包含第一目标物的图像,所述第一目标物处于所述第一设备所处的第一现实场景中;
    根据所述图像,构建与所述第一现实场景对应的地图,获得地图数据;
    将所述地图数据传输至第二设备,所述地图数据用于指示所述第二设备根据所述地图数据以及第二目标物进行重定位,所述第一目标物与所述第二目标物具有至少两个以上相同的特征点,所述第二目标物处于所述第二设备所处的第二现实场景中,所述第二现实场景与所述第一现实场景不同。
  2. 根据权利要求1所述的方法,其特征在于,在所述将所述地图发送至所述第二设备之后,所述方法还包括:
    当接收到所述第二设备发送的指示信息时,控制所述第一设备在所述第一现实场景中叠加虚拟物体,其中,所述指示信息用于指示所述第二设备重定位成功;
    显示所述虚拟物体。
  3. 根据权利要求2所述的方法,其特征在于,所述控制所述第一设备在所述第一现实场景中叠加虚拟物体,包括:
    获取所述第一现实场景的场景图像;
    获取虚拟物体需要叠加于所述第一现实场景的叠加位置;
    根据所述叠加位置,确定所述虚拟物体的像素坐标;
    根据所述像素坐标,将所述虚拟物体与所述场景图像合成,得到合成图像;
    所述显示虚拟物体,包括:
    将所述合成图像进行显示。
  4. 根据权利要求2所述的方法,其特征在于,所述控制所述第一设备在所述第一现实场景中叠加虚拟物体,包括:
    获取虚拟物体需要叠加于所述第一现实场景的叠加位置;
    根据所述叠加位置以及所述虚拟物体的内容数据,生成所述虚拟物体。
  5. 根据权利要求3或4所述的方法,其特征在于,所述获取虚拟物体需要叠加于所述第一现实场景的叠加位置,包括:
    根据检测到的添加操作,获取虚拟物体需要叠加于所述第一现实场景的叠加位置。
  6. 根据权利要求5所述的方法,其特征在于,检测所述添加操作,包括:
    获取拍摄的手势图像;
    根据所述手势图像进行手势识别;
    在识别出的手势为设定的手势时,确定检测到所述添加操作。
  7. 根据权利要求2所述的方法,其特征在于,在所述生成并显示虚拟物体之后,所述方法还包括:
    获取所述虚拟物体的位姿数据以及显示数据;
    将所述显示数据以及所述位姿数据发送至所述第二设备,所述显示数据以及所述位姿数据用于所述第二设备同步显示所述虚拟物体。
  8. 根据权利要求1-7任一项所述的方法,其特征在于,所述第一目标物上包括预先根据所述第一现实场景生成的图案,所述根据所述图像,构建与所述第一现实场景对应的地图,获得地图数据,包括:
    识别所述图像中的所述图案,获得所述图案中的特征信息,所述特征信息与所述第一现实场景对应;
    获取所述第一设备的位姿数据;
    根据所述特征信息以及所述位姿数据,生成与所述第一现实场景对应的地图,获得地图数据。
  9. 根据权利要求8所述的方法,其特征在于,所述位姿数据包括所述第一设备在其所对应的空间坐标系中的位置和姿态,所述根据所述特征信息以及所述位姿数据,生成与所述第一现实场景对应的地图,获得地图数据,包括:
    根据所述特征信息,确定与所述第一现实场景对应的地图内容;
    根据所述位置和姿态,确定所述地图内容于所述第一设备所对应的空间坐标系中的位置;
    根据所述地图内容以及所述位置,构建与所述第一现实场景对应的地图,获得地图数据。
  10. 根据权利要求1-9任一项所述的方法,其特征在于,所述根据所述图像,构建与所述第一现实场景对应的地图,获得地图数据,包括:
    根据所述特征信息以及所述位姿数据,构建多个地图;
    从所述多个地图中获取满足指定条件的地图,作为与所述第一现实场景对应的地图,并获得地图数据。
  11. 根据权利要求10所述的方法,其特征在于,所述指定条件包括:
    地图中的内容与所述第一现实场景的匹配度高于设定阈值。
  12. 根据权利要求1-11任一项所述的方法,其特征在于,所述将所述地图数据传输至所述第二设备,包括:
    将所述地图数据传输至服务器,所述服务器用于将所述地图数据传输至所述第二设备。
  13. A data processing method, applied to a second device, the method comprising:
    acquiring map data transmitted by a first device, the map data being obtained by the first device by constructing, according to an acquired image containing a first target object, a map corresponding to a first real scene where the first device is located, the first target object being located in the first real scene;
    acquiring an image containing a second target object, the second target object being located in a second real scene where the second device is located, wherein the first target object and the second target object have at least two identical feature points, and the second real scene is different from the first real scene; and
    performing relocalization according to the map data and the image containing the second target object.
  14. The method according to claim 13, wherein the performing relocalization according to the map data and the image containing the second target object comprises:
    determining, according to the image containing the second target object and the map data, first pose data of the second device in a first spatial coordinate system, the first spatial coordinate system being a spatial coordinate system corresponding to the first device;
    acquiring second pose data of the second device in a second spatial coordinate system, the second spatial coordinate system being a spatial coordinate system corresponding to the second device; and
    obtaining, according to the first pose data and the second pose data, a coordinate-system conversion relationship between the first spatial coordinate system and the second spatial coordinate system.
  15. The method according to claim 14, wherein the method further comprises:
    acquiring display data and third pose data of a virtual object displayed by the first device;
    converting, according to the coordinate-system conversion relationship, the third pose data into fourth pose data in the second spatial coordinate system; and
    superimposing the virtual object on the second real scene according to the display data and the fourth pose data.
  16. The method according to claim 13, wherein after the performing relocalization according to the map data and the image containing the second target object, the method further comprises:
    generating indication information and transmitting the indication information to the first device, the indication information being used to indicate that the second device has been successfully relocalized.
  17. A data processing apparatus, applied to a first device, the apparatus comprising: a first image acquisition module, a map construction module, and a map transmission module, wherein
    the first image acquisition module is configured to acquire an image containing a first target object, the first target object being located in a first real scene where the first device is located;
    the map construction module is configured to construct, according to the image, a map corresponding to the first real scene to obtain map data; and
    the map transmission module is configured to transmit the map data to a second device, the map data being used to instruct the second device to perform relocalization according to the map data and a second target object, wherein the first target object and the second target object have at least two identical feature points, the second target object is located in a second real scene where the second device is located, and the second real scene is different from the first real scene.
  18. A data processing apparatus, applied to a second device, the apparatus comprising: a map acquisition module, a second image acquisition module, and a relocalization module, wherein
    the map acquisition module is configured to acquire map data transmitted by a first device, the map data being obtained by the first device by constructing, according to an acquired image containing a first target object, a map corresponding to a first real scene where the first device is located, the first target object being located in the first real scene;
    the second image acquisition module is configured to acquire an image containing a second target object, the second target object being located in a second real scene where the second device is located, wherein the first target object and the second target object have at least two identical feature points, and the second real scene is different from the first real scene; and
    the relocalization module is configured to perform relocalization according to the map data and the image containing the second target object.
  19. An electronic device, comprising:
    one or more processors;
    a memory; and
    one or more application programs, wherein the one or more application programs are stored in the memory and configured to be executed by the one or more processors, and the one or more application programs are configured to perform the method according to any one of claims 1-12 or the method according to any one of claims 13-16.
  20. A computer-readable storage medium, wherein program code is stored in the computer-readable storage medium, and the program code can be invoked by a processor to perform the method according to any one of claims 1-12 or the method according to any one of claims 13-16.
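The sketch below is an editorial illustration of the projection step named in claim 3: before the virtual object is composited with the captured scene image, its superimposition position must be mapped to pixel coordinates. It assumes a simple pinhole camera model with made-up intrinsics; the names and values are illustrative only and are not part of the claims.

```python
# Project a 3D superimposition position (camera coordinates) to pixel coordinates.
import numpy as np

K = np.array([[800.0,   0.0, 320.0],   # assumed pinhole intrinsics (fx, 0, cx)
              [  0.0, 800.0, 240.0],   # (0, fy, cy)
              [  0.0,   0.0,   1.0]])

def to_pixel(overlay_position_cam: np.ndarray) -> tuple:
    """Return the pixel coordinates at which the virtual object should be drawn."""
    uvw = K @ overlay_position_cam
    return (uvw[0] / uvw[2], uvw[1] / uvw[2])

u, v = to_pixel(np.array([0.1, -0.05, 2.0]))  # a point 2 m in front of the camera
print(f"composite the virtual object around pixel ({u:.1f}, {v:.1f})")
```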
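A second editorial sketch illustrates the coordinate-system conversion described in claims 14 and 15: from the second device's pose in the first device's spatial coordinate system (first pose data) and its pose in its own coordinate system (second pose data), a conversion relationship between the two systems is obtained, and a virtual object's pose received from the first device (third pose data) can then be converted into the second coordinate system (fourth pose data) before the object is superimposed on the second real scene. The 4x4 homogeneous-transform formulation and the example poses are assumptions made for illustration.

```python
import numpy as np

def make_pose(rotation_z_deg: float, translation) -> np.ndarray:
    """Build a 4x4 pose (rotation about Z plus translation) for the example."""
    a = np.radians(rotation_z_deg)
    T = np.eye(4)
    T[:3, :3] = [[np.cos(a), -np.sin(a), 0.0],
                 [np.sin(a),  np.cos(a), 0.0],
                 [0.0,        0.0,       1.0]]
    T[:3, 3] = translation
    return T

# First pose data: second device's pose in the FIRST spatial coordinate system
# (obtained by relocalizing against the received map data and the second target).
T_w1_dev2 = make_pose(30.0, [1.0, 0.5, 0.0])

# Second pose data: the same device's pose in its OWN spatial coordinate system
# (obtained from its local tracking).
T_w2_dev2 = make_pose(0.0, [0.0, 0.0, 0.0])

# Coordinate-system conversion relationship: maps poses from system 1 to system 2.
T_w2_w1 = T_w2_dev2 @ np.linalg.inv(T_w1_dev2)

# Third pose data: a virtual object placed by the first device, in system 1.
T_w1_obj = make_pose(0.0, [1.5, 0.5, 0.2])

# Fourth pose data: the same object expressed in system 2, ready to be
# superimposed on the second real scene.
T_w2_obj = T_w2_w1 @ T_w1_obj
print(np.round(T_w2_obj, 3))
```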
PCT/CN2020/128443 2019-11-27 2020-11-12 Data processing method and apparatus, electronic device, and storage medium WO2021104037A1 (zh)

Priority Applications (2)

Application Number Priority Date Filing Date Title
EP20891862.3A EP4057109A4 (en) 2019-11-27 2020-11-12 DATA PROCESSING METHOD AND DEVICE, AS WELL AS ELECTRONIC DEVICE AND STORAGE MEDIA
US17/723,319 US20220245859A1 (en) 2019-11-27 2022-04-18 Data processing method and electronic device

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
CN201911184981.9A CN111078003B (zh) 2019-11-27 2021-10-22 Data processing method and apparatus, electronic device, and storage medium
CN201911184981.9 2019-11-27

Related Child Applications (1)

Application Number Title Priority Date Filing Date
US17/723,319 Continuation US20220245859A1 (en) 2019-11-27 2022-04-18 Data processing method and electronic device

Publications (1)

Publication Number Publication Date
WO2021104037A1 true WO2021104037A1 (zh) 2021-06-03

Family

ID=70311949

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2020/128443 WO2021104037A1 (zh) Data processing method and apparatus, electronic device, and storage medium

Country Status (4)

Country Link
US (1) US20220245859A1 (zh)
EP (1) EP4057109A4 (zh)
CN (1) CN111078003B (zh)
WO (1) WO2021104037A1 (zh)

Families Citing this family (8)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN111078003B (zh) * 2019-11-27 2021-10-22 Oppo广东移动通信有限公司 Data processing method and apparatus, electronic device, and storage medium
CN112150634B (zh) * 2020-08-31 2024-03-26 浙江工业大学 Large-scale virtual scene roaming method based on multi-person redirection
CN112967404A (zh) * 2021-02-24 2021-06-15 深圳市慧鲤科技有限公司 Method and apparatus for controlling movement of a virtual object, electronic device, and storage medium
CN112950711A (zh) * 2021-02-25 2021-06-11 深圳市慧鲤科技有限公司 Object control method and apparatus, electronic device, and storage medium
CN115129163B (zh) * 2022-08-30 2022-11-11 环球数科集团有限公司 Virtual human behavior interaction system
CN117671203A (zh) * 2022-08-31 2024-03-08 华为技术有限公司 Virtual digital content display system and method, and electronic device
CN116155694B (zh) * 2023-04-04 2023-07-04 深圳中正信息科技有限公司 Management method for Internet-of-Things devices, device, and readable storage medium
CN116258794B (zh) * 2023-05-10 2023-07-28 广州海洋地质调查局三亚南海地质研究所 Seismic profile digitization method and apparatus

Family Cites Families (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN106920079B (zh) * 2016-12-13 2020-06-30 阿里巴巴集团控股有限公司 Augmented-reality-based virtual object allocation method and apparatus
CN108520552A (zh) * 2018-03-26 2018-09-11 广东欧珀移动通信有限公司 Image processing method and apparatus, storage medium, and electronic device
CN109949422B (zh) * 2018-10-15 2020-12-15 华为技术有限公司 Data processing method and device for virtual scenes
CN110286768B (zh) * 2019-06-27 2022-05-17 Oppo广东移动通信有限公司 Virtual object display method, terminal device, and computer-readable storage medium
CN110457414B (zh) * 2019-07-30 2023-06-09 Oppo广东移动通信有限公司 Offline map processing and virtual object display method and apparatus, medium, and device

Patent Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20160018657A1 (en) * 2013-01-13 2016-01-21 Qualcomm Incorporated Optics display system with dynamic zone plate capability
CN108734736A (zh) * 2018-05-22 2018-11-02 腾讯科技(深圳)有限公司 Camera pose tracking method and apparatus, device, and storage medium
CN109035415A (zh) * 2018-07-03 2018-12-18 百度在线网络技术(北京)有限公司 Virtual model processing method and apparatus, device, and computer-readable storage medium
CN109087359A (zh) * 2018-08-30 2018-12-25 网易(杭州)网络有限公司 Pose determination method, pose determination apparatus, medium, and computing device
CN111078003A (zh) * 2019-11-27 2020-04-28 Oppo广东移动通信有限公司 Data processing method and apparatus, electronic device, and storage medium

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
See also references of EP4057109A4 *

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN113377205A (zh) * 2021-07-06 2021-09-10 浙江商汤科技开发有限公司 Scene display method and apparatus, device, vehicle, and computer-readable storage medium

Also Published As

Publication number Publication date
CN111078003B (zh) 2021-10-22
EP4057109A1 (en) 2022-09-14
US20220245859A1 (en) 2022-08-04
EP4057109A4 (en) 2023-05-10
CN111078003A (zh) 2020-04-28

Similar Documents

Publication Publication Date Title
WO2021104037A1 (zh) Data processing method and apparatus, electronic device, and storage medium
US11165837B2 (en) Viewing a virtual reality environment on a user device by joining the user device to an augmented reality session
JP5145444B2 (ja) Image processing apparatus, control method for image processing apparatus, and program
US10692288B1 (en) Compositing images for augmented reality
CN111158469A (zh) Viewing angle switching method and apparatus, terminal device, and storage medium
US20160375360A1 (en) Methods, apparatuses, and systems for remote play
EP2343685B1 (en) Information processing device, information processing method, program, and information storage medium
US11288871B2 (en) Web-based remote assistance system with context and content-aware 3D hand gesture visualization
US20200043242A1 (en) Interactive method for virtual content and terminal device
JP6799017B2 (ja) Terminal device, system, program, and method
WO2021196973A1 (zh) Virtual content display method and apparatus, electronic device, and storage medium
CN113885700A (zh) Remote assistance method and apparatus
CN110737414A (zh) Interactive display method and apparatus, terminal device, and storage medium
JP2011243019A (ja) Image display system
US11430178B2 (en) Three-dimensional video processing
CN113411537B (zh) Video call method and apparatus, terminal, and storage medium
CN113676720A (zh) Multimedia resource playback method and apparatus, computer device, and storage medium
CN111198609A (zh) Interactive display method and apparatus, electronic device, and storage medium
CN111913560A (zh) Virtual content display method, apparatus, and system, terminal device, and storage medium
US20230415040A1 (en) Image generation apparatus, image generation method, and program
KR101986227B1 (ko) Apparatus and method for registration between 360-degree VR images
CN114299581A (zh) Human body motion display method and apparatus, device, and readable storage medium
JP2018073061A (ja) Three-dimensional moving image generation device, three-dimensional moving image data distribution device, three-dimensional moving image generation method, and three-dimensional moving image generation program

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 20891862

Country of ref document: EP

Kind code of ref document: A1

ENP Entry into the national phase

Ref document number: 2020891862

Country of ref document: EP

Effective date: 20220608

NENP Non-entry into the national phase

Ref country code: DE