CN115119135A - Data sending method, receiving method and device - Google Patents


Info

Publication number
CN115119135A
CN115119135A
Authority
CN
China
Prior art keywords
anchor point
target
information
data
terminal device
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202110249946.1A
Other languages
Chinese (zh)
Inventor
殷佳欣
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Petal Cloud Technology Co Ltd
Original Assignee
Petal Cloud Technology Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Petal Cloud Technology Co Ltd filed Critical Petal Cloud Technology Co Ltd
Priority to CN202110249946.1A priority Critical patent/CN115119135A/en
Publication of CN115119135A publication Critical patent/CN115119135A/en
Pending legal-status Critical Current

Classifications

    • H: ELECTRICITY
    • H04: ELECTRIC COMMUNICATION TECHNIQUE
    • H04W: WIRELESS COMMUNICATION NETWORKS
    • H04W 4/00: Services specially adapted for wireless communication networks; Facilities therefor
    • H04W 4/02: Services making use of location information
    • H04W 4/025: Services making use of location information using location based information parameters
    • H04W 4/027: Services making use of location information using location based information parameters using movement velocity, acceleration information
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06F: ELECTRIC DIGITAL DATA PROCESSING
    • G06F 3/00: Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F 3/01: Input arrangements or combined input and output arrangements for interaction between user and computer
    • G06F 3/011: Arrangements for interaction with the human body, e.g. for user immersion in virtual reality
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06T: IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 19/00: Manipulating 3D models or images for computer graphics
    • G06T 19/006: Mixed reality

Landscapes

  • Engineering & Computer Science (AREA)
  • General Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Computer Networks & Wireless Communication (AREA)
  • Signal Processing (AREA)
  • Computer Graphics (AREA)
  • Computer Hardware Design (AREA)
  • Software Systems (AREA)
  • Human Computer Interaction (AREA)
  • Mobile Radio Communication Systems (AREA)

Abstract

The application discloses a data sending method, a data receiving method, and a corresponding device. In the sending method, a network device acquires position information and movement attribute information of a first terminal device at a first moment, where the movement attribute information includes the movement speed and direction of the first terminal device. The network device determines a target area according to the position information and the movement attribute information of the first terminal device, the target area being the area the first terminal device is predicted to reach at a second moment. The network device then acquires first data corresponding to the target area and sends the first data to the first terminal device, where the first data includes a target anchor point set and a target object set, and the target anchor point set is used to mark the at least one 3D object that makes up the target object set. Because the network device delivers the anchor point information and 3D objects corresponding to the target area to the terminal device in advance, the time a user spends waiting for 3D objects to load is saved and the user experience is improved.

Description

Data sending method, receiving method and device
Technical Field
The present application relates to the field of terminals, and in particular, to an anchor-point-based data sending method, data receiving method, and device.
Background
An anchor point (anchor) can be used to locate the position and size of an object. In an augmented reality (AR) environment, a user identifies, through the camera of a mobile-phone terminal, the feature point set of the object in front of the camera; this feature point set can only mark features of the real environment. The mobile phone captures the anchor point information with its camera and sends it to a cloud network device, for example a cloud server or an AR Cloud, and the cloud network device can determine the position and pose of the mobile phone according to this anchor point information. Meanwhile, other virtual objects can be loaded based on the anchor point information, and the positions and poses these virtual objects present in the real environment can be set, so that the user obtains more information about the currently scanned object. Here, a virtual object is a three-dimensional (3D) object developed by a developer with a development tool, and the position and pose of a 3D object depend on anchor points. The mobile phone first identifies at least one anchor point through its camera, and then loads the 3D object (or 3D object image) on its display screen according to these anchor points, so that the position and pose of the 3D object are fixed relative to all the anchor points, achieving the effect of projecting the 3D object into the real environment.
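As a minimal illustrative sketch (not part of the patent; all names and the 2D simplification are assumptions), the anchor-relative placement described above amounts to storing only the object's offset in the anchor's local frame, so that re-recognizing the anchor re-localizes the object:

```python
import math

def object_world_position(anchor_pos, anchor_yaw_deg, offset_local):
    """Place a virtual 3D object relative to a recognized anchor.

    Simplified to 2D with a yaw angle for brevity: the object stores only
    an offset in the anchor's local frame, so its world position is the
    local offset rotated into world coordinates plus the anchor position.
    """
    yaw = math.radians(anchor_yaw_deg)
    ox, oy = offset_local
    wx = anchor_pos[0] + ox * math.cos(yaw) - oy * math.sin(yaw)
    wy = anchor_pos[1] + ox * math.sin(yaw) + oy * math.cos(yaw)
    return (wx, wy)

# An object 0.5 m along the local x-axis of an unrotated anchor at (2, 3):
pos = object_world_position((2.0, 3.0), 0.0, (0.5, 0.0))
```

Because the pose is anchor-relative, the object appears fixed in the real environment no matter how the camera moves.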
The cloud server or AR Cloud maintains the anchor point information received from terminal devices, and the stored anchor point information also makes it convenient for different mobile phone users to share 3D objects with each other. For example, user A recognizes an anchor point through a mobile phone, places a 3D object that depends on the anchor point, and sends the anchor point and the feature point set of the 3D object to the cloud server or AR Cloud. After user B identifies the same anchor point through another mobile phone, the 3D object or 3D object image placed by user A can be loaded from the cloud, thereby achieving sharing of the 3D object.
However, before user B obtains the shared 3D object, a series of steps is required: the mobile phone scans anchor points, identifies anchor point information and object feature information, uploads the anchor point information and feature information, the cloud server matches the anchor points, and the anchor points and 3D objects are delivered. As a result, the user may wait in front of the currently browsed object for a long time, for example at least 5 seconds (s), before the 3D object image corresponding to the object is obtained, which makes for a poor user experience.
Disclosure of Invention
The application provides a data sending method, a data receiving method, and a device, which are used to reduce the time a user waits for a 3D object image to load. Specifically, the application discloses the following technical solutions:
In a first aspect, the present application provides a data sending method, including: a network device acquires position information and movement attribute information of a first terminal device at a first moment, where the movement attribute information includes the movement speed and direction of the first terminal device; the network device determines a target area according to the position information and the movement attribute information of the first terminal device, where the target area is the area the first terminal device is predicted to reach at a second moment, the second moment being the moment following the first moment; and the network device acquires first data corresponding to the target area and sends the first data to the first terminal device.
Wherein the first data includes a target anchor point set and a target object set; the target area includes at least one anchor point, the target anchor point set consists of the at least one anchor point, the at least one anchor point is used for marking at least one 3D object of the target area, and the at least one 3D object constitutes the target object set.
According to the method provided by this embodiment, the cloud server uses the position and movement attributes of the terminal device to predict in advance the target area the user will reach, and then sends the anchor point information and 3D objects corresponding to the target area to the terminal device ahead of time, so that the terminal device can load the 3D object information to be displayed in advance. This spares the user from having to, on reaching the target area, scan the current environment and go through a series of operations such as identifying environmental feature information, uploading the feature information, matching anchor points in the cloud, and loading; the time the user waits for 3D objects to load is therefore saved and the user experience is improved.
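The prediction step can be sketched as simple dead reckoning (an illustrative assumption, not the patent's specified algorithm; the uncertainty radius and function names are hypothetical): the terminal's position is extrapolated along its movement direction at its movement speed to the second moment.

```python
import math

def predict_target_area(position, speed, heading_deg, dt, radius):
    """Predict the area the terminal is expected to reach after dt seconds.

    position:    (x, y) of the first terminal device at the first moment
    speed:       movement speed in m/s (from the movement attribute info)
    heading_deg: movement direction, in degrees from the x-axis
    dt:          seconds between the first and second moments
    radius:      uncertainty radius of the predicted circular area, metres

    Returns the centre of the predicted target area and its radius.
    """
    heading = math.radians(heading_deg)
    cx = position[0] + speed * dt * math.cos(heading)
    cy = position[1] + speed * dt * math.sin(heading)
    return (cx, cy), radius

# A user at (0, 0) walking at 1.5 m/s along the x-axis for 2 s is
# predicted to be near (3.0, 0.0):
centre, r = predict_target_area((0.0, 0.0), 1.5, 0.0, 2.0, 5.0)
```

In a real deployment the network device would likely also account for indoor floor plans (e.g. which exhibition hall lies in the predicted direction), as suggested by FIG. 5.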
With reference to the first aspect, in a possible implementation manner of the first aspect, the acquiring, by the network device, first data corresponding to the target area includes: the network device acquires a correspondence between at least one area and at least one anchor point set, where the at least one area includes the target area; the network device searches the correspondence for the target anchor point set associated with the target area; and the network device determines the target object set according to the 3D object marked by each anchor point in the target anchor point set.
In this implementation manner, at least one target anchor point is used to mark the 3D objects, so the scanned target anchor points can be used to determine the target object set, which improves search efficiency.
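The two-stage lookup above (area to anchor set, then anchor to marked objects) can be illustrated with in-memory maps; the region names, anchor ids, and object file names below are purely hypothetical stand-ins for the server-side correspondence:

```python
# Hypothetical correspondence between areas and anchor point sets.
REGION_TO_ANCHORS = {
    "hall_a": {"anchor_1", "anchor_2"},
    "hall_b": {"anchor_3"},
}

# Hypothetical mapping from each anchor to the 3D objects it marks.
ANCHOR_TO_OBJECTS = {
    "anchor_1": ["statue.glb"],
    "anchor_2": ["painting.glb", "plaque.glb"],
    "anchor_3": ["fountain.glb"],
}

def first_data_for_region(region):
    """Look up the target anchor set for a region, then assemble the
    target object set from the 3D objects each anchor marks."""
    anchors = REGION_TO_ANCHORS.get(region, set())
    objects = {obj for a in anchors for obj in ANCHOR_TO_OBJECTS.get(a, [])}
    return anchors, objects

anchors, objects = first_data_for_region("hall_a")
```

The anchor set and the assembled object set together form the "first data" delivered to the terminal.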
With reference to the first aspect, in another possible implementation manner of the first aspect, after the network device acquires the first data corresponding to the target area, the method further includes: the network device screens second data out of the first data according to context information of the first terminal device, where the second data includes the target anchor point set and a part of the target object set; the context information includes one or more of a user identifier, a device type, a device capability, and a cache size.
The network device sending the first data to the first terminal device, including: and the network equipment sends the second data to the first terminal equipment.
With reference to the first aspect, in yet another possible implementation manner of the first aspect, the screening, by the network device, second data from the first data according to the context information of the first terminal device includes one or more of the following:
the network device deletes, from the target object set according to the user identifier in the context information, 3D objects to which the user does not have access rights, obtaining a remaining 3D object set, where the user identifier is used to indicate whether the user has access rights to each 3D object;
or, the network device screens out, from the target object set according to the device type in the context information, a 3D object set suitable for the device type;
or, the network device screens out, from the target object set according to the device capability in the context information, a 3D object set suitable for the device capability, where the device capability includes the degree of detail with which the device renders 3D objects;
or, the network device screens out, from the target object set according to the cache size in the context information, a 3D object set whose storage size does not exceed the cache size of the first terminal device.
This implementation manner uses the context information of the first terminal device to further screen the found target anchor point set and target object set, eliminating 3D objects that do not match the user's characteristics or device context, thereby saving storage space, reducing transmission delay, and further improving user experience.
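A sketch of this context-based screening follows (illustrative only; the ACL, size, and level-of-detail tables are hypothetical, and the device-type filter is omitted for brevity since it would need a per-object compatibility table). Note the cache-size clause is read here as a total budget, which is one plausible interpretation:

```python
def filter_objects(objects, context, acl, sizes, lod):
    """Screen the target object set using the terminal's context info.

    objects: set of 3D object ids (the target object set)
    context: may contain 'user_id', 'device_capability', 'cache_size'
    acl:     user_id -> set of object ids that user may access
    sizes:   object id -> storage size
    lod:     object id -> minimum rendering capability required
    """
    kept = set(objects)
    if "user_id" in context:
        # Drop objects the user has no access rights to.
        kept &= acl.get(context["user_id"], set())
    if "device_capability" in context:
        # Keep only objects the device can render at the required detail.
        kept = {o for o in kept if lod.get(o, 0) <= context["device_capability"]}
    if "cache_size" in context:
        # Greedily keep objects, smallest first, within the cache budget.
        budget, fitted = context["cache_size"], set()
        for o in sorted(kept, key=lambda o: sizes.get(o, 0)):
            if sizes.get(o, 0) <= budget:
                fitted.add(o)
                budget -= sizes.get(o, 0)
        kept = fitted
    return kept

kept = filter_objects(
    {"statue", "painting", "map"},
    {"user_id": "u1", "cache_size": 10},
    acl={"u1": {"statue", "painting", "map"}},
    sizes={"statue": 4, "painting": 4, "map": 8},
    lod={},
)
# The user may access everything, but only 10 units of cache are free,
# so the largest object is dropped.
```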
With reference to the first aspect, in yet another possible implementation manner of the first aspect, the acquiring, by the network device, the location information of the first terminal device at the first time includes: the network equipment receives first anchor point information sent by the first terminal equipment at the first moment; and the network equipment searches whether the first anchor point information is stored in an anchor point database, and if so, determines the position information of the first terminal equipment according to the first anchor point information, wherein the anchor point database comprises the anchor point information of at least one anchor point.
The first anchor point information can be quickly looked up in the anchor point database stored on the network device, so that the position of the first terminal device is determined and the efficiency of data searching and sending is improved.
It should be understood that the location information of the first terminal device may also be obtained in other ways, such as in real time by GPS positioning technology.
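The database lookup above might look as follows (a sketch under assumptions: anchor ids and recorded positions are hypothetical, and the centroid of matched anchors is used as a simple position estimate):

```python
# Hypothetical anchor database: anchor id -> position recorded when the
# environment was scanned in advance by a second terminal device.
ANCHOR_DB = {
    "anchor_1": (12.0, 3.5),
    "anchor_2": (40.0, 8.0),
}

def locate_terminal(reported_anchor_ids):
    """Resolve the first terminal device's position from the first
    anchor point information it reports.

    Returns the centroid of the matched anchors' positions, or None when
    no reported anchor is stored in the database (in which case the
    network device could fall back to e.g. GPS positioning).
    """
    hits = [ANCHOR_DB[a] for a in reported_anchor_ids if a in ANCHOR_DB]
    if not hits:
        return None
    return (sum(p[0] for p in hits) / len(hits),
            sum(p[1] for p in hits) / len(hits))
```

For example, `locate_terminal(["anchor_1"])` resolves to the stored position of that anchor.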
With reference to the first aspect, in yet another possible implementation manner of the first aspect, the acquiring, by the network device, a correspondence between at least one area and at least one anchor point set includes: the network equipment receives scanning information sent by second terminal equipment, wherein the scanning information comprises anchor point information of at least one anchor point scanned in the current area by the second terminal equipment and a 3D object marked by each anchor point; and the network equipment acquires the corresponding relation between the current area and the anchor point information of at least one anchor point of the current area according to the scanning information.
In addition, the method further includes the network device storing the correspondence.
In this implementation manner, the surrounding environment is scanned in advance and anchor point information is set, which provides a reference basis for marking 3D objects and prepares for subsequent data searching and sending.
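Ingesting a second terminal's scan to build the area-to-anchor correspondence could be sketched like this (function and key names are illustrative, not from the patent):

```python
def ingest_scan(corr, region, scan_info):
    """Record, for a region, the anchors a second terminal device scanned
    there and the 3D object(s) each anchor marks.

    corr:      dict region -> {anchor id -> list of marked 3D objects}
    scan_info: dict anchor id -> list of 3D objects marked by that anchor
    """
    region_map = corr.setdefault(region, {})
    for anchor_id, marked_objects in scan_info.items():
        region_map.setdefault(anchor_id, []).extend(marked_objects)
    return corr

# A second terminal reports one anchor and its marked object for hall A:
corr = ingest_scan({}, "hall_a", {"anchor_1": ["statue.glb"]})
```

Subsequent scans of the same region merge into the existing entry rather than overwriting it.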
In a second aspect, the present application further provides a data receiving method, where the method includes: the method comprises the steps that first terminal equipment receives first data sent by network equipment, and when the first terminal equipment enters a target area and scans a target anchor point set in the first data, the target object set is displayed on the first terminal equipment.
Wherein the first data includes the target anchor point set and the target object set; the target anchor point set consists of at least one anchor point in a target area, the at least one anchor point is used for marking at least one 3D object of the target area, the at least one 3D object constitutes the target object set, and the target area is the area the first terminal device is predicted to reach at a second moment.
Optionally, in a possible implementation manner, the first terminal device obtains at least one piece of anchor point information by scanning the current surrounding environment, compares it with the anchor point information stored in advance to check for a match, and if they match, determines at least one target 3D object according to the pre-established correspondence between the target anchor point set and the target object set.
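On the terminal side, the matching step can be sketched as a set intersection between the anchors actually scanned and the pre-downloaded target anchor set (names are illustrative assumptions):

```python
def objects_to_display(scanned_anchors, target_anchors, anchor_to_objects):
    """On entering the target area, display only the 3D objects whose
    anchors were both scanned locally and delivered in the first data."""
    matched = set(scanned_anchors) & set(target_anchors)
    return {obj for a in matched for obj in anchor_to_objects.get(a, [])}

# The terminal scanned anchor_1 and an unknown anchor_9; only anchor_1
# belongs to the pre-downloaded target anchor set, so only its object
# is displayed:
shown = objects_to_display(
    ["anchor_1", "anchor_9"],
    {"anchor_1", "anchor_2"},
    {"anchor_1": ["statue.glb"], "anchor_2": ["plaque.glb"]},
)
```

Because the objects were delivered in advance, this match-and-display step needs no round trip to the cloud.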
With reference to the second aspect, in a possible implementation manner of the second aspect, before the receiving, by the first terminal device, first data sent by a network device, the method further includes: the first terminal equipment scans an external environment to obtain first anchor point information, wherein the first anchor point information comprises anchor point information of at least one anchor point in the scanned external environment; and the first terminal equipment sends the first anchor point information to the network equipment at the first moment, wherein the first anchor point information is used for determining the position information of the first terminal equipment.
With reference to the second aspect, in another possible implementation manner of the second aspect, the method further includes: the first terminal device sends context information of the first terminal device to the network device, wherein the context information is used for screening out second data from the first data, and the second data comprises the target anchor point set and a part of the target object set.
With reference to the second aspect, in yet another possible implementation manner of the second aspect, the context information includes: one or more of user identification, device type, device capabilities, and cache size;
if the context information includes a user identifier, the portion of the target object set in the second data includes: the remaining 3D object set obtained by deleting, from the target object set, 3D objects to which the user does not have access rights, where the user identifier is used to indicate whether the user has access rights to each 3D object;
if the context information includes a device type, the portion of the set of target objects in the second data includes: a set of 3D objects in the set of target objects that fit the device type;
if device capabilities are included in the context information, the portion of the set of target objects in the second data comprises: a set of 3D objects in the set of target objects that fit the device capabilities, the device capabilities including a degree of detail of a device rendering the 3D objects;
if the context information includes a cache size, the portion of the target object set in the second data includes: a 3D object set in the target object set whose storage size does not exceed the cache size of the first terminal device.
In a third aspect, the present application further provides an apparatus for data transmission, where the apparatus is applicable to a network device, and the apparatus includes:
the mobile terminal device comprises an acquisition unit, a processing unit and a processing unit, wherein the acquisition unit is used for acquiring position information and mobile attribute information of a first terminal device at a first moment, and the mobile attribute information comprises the mobile speed and the mobile direction of the first terminal device; the mobile terminal device comprises a processing unit used for determining a target area according to the position information and the mobile attribute information of the first terminal device and acquiring first data corresponding to the target area, and a sending unit used for sending the first data to the first terminal device.
The target area is the area the first terminal device is predicted to reach at a second moment, the second moment being the moment following the first moment; the first data includes a target anchor point set and a target object set, the target area includes at least one anchor point, the target anchor point set consists of the at least one anchor point, the at least one anchor point is used for marking at least one 3D object of the target area, and the at least one 3D object constitutes the target object set.
With reference to the third aspect, in a possible implementation manner of the third aspect, the processing unit is further configured to obtain a correspondence between at least one area and at least one anchor point set, and search a target anchor point set associated with the target area in the correspondence; determining the target object set according to the 3D object marked by each anchor point in the target anchor point set; the at least one region includes the target region.
With reference to the third aspect, in another possible implementation manner of the third aspect, the processing unit is further configured to, after obtaining first data corresponding to the target area, screen second data from the first data according to context information of the first terminal device, where the second data includes the target anchor point set and a part of the target object set; the context information includes: one or more of user identification, device type, device capabilities, and cache size; the sending unit is further configured to send the second data to the first terminal device.
With reference to the third aspect, in yet another possible implementation manner of the third aspect, the processing unit is further configured to: delete, from the target object set according to the user identifier in the context information, 3D objects to which the user does not have access rights, obtaining a remaining 3D object set, where the user identifier is used to indicate whether the user has access rights to each 3D object;
or screen out, from the target object set according to the device type in the context information, a 3D object set suitable for the device type;
or screen out, from the target object set according to the device capability in the context information, a 3D object set suitable for the device capability, where the device capability includes the degree of detail with which the device renders 3D objects;
or screen out, from the target object set according to the cache size in the context information, a 3D object set whose storage size does not exceed the cache size of the first terminal device.
With reference to the third aspect, in yet another possible implementation manner of the third aspect, the apparatus further includes a receiving unit, configured to receive first anchor point information sent by the first terminal device at the first moment; the processing unit is further configured to search an anchor point database for the first anchor point information and, if it is stored there, determine the position information of the first terminal device according to the first anchor point information, where the anchor point database includes anchor point information of at least one anchor point.
With reference to the third aspect, in yet another possible implementation manner of the third aspect, the receiving unit is further configured to receive scanning information sent by the second terminal device, where the scanning information includes anchor point information of at least one anchor point scanned in a current area by the second terminal device and a 3D object marked by each anchor point; the processing unit is further configured to obtain, according to the scanning information, a correspondence between the current region and anchor point information of at least one anchor point of the current region.
In a fourth aspect, the present application further provides a data receiving apparatus, which is applicable to a first terminal device, such as a UE, and includes: a receiving unit, configured to receive first data sent by a network device; and a processing unit, configured to display the target object set on the first terminal device when the first terminal device enters the target area and scans the target anchor point set in the first data.
Wherein the first data includes the target anchor point set and the target object set; the target anchor point set includes at least one anchor point in a target area, the at least one anchor point is used for marking at least one 3D object of the target area, the at least one 3D object constitutes the target object set, and the target area is the area the first terminal device is predicted to reach at a second moment.
With reference to the fourth aspect, in a possible implementation manner of the fourth aspect, the apparatus further includes a sending unit.
the processing unit is further configured to scan an external environment to obtain first anchor point information before the receiving unit receives the first data, where the first anchor point information includes anchor point information of at least one anchor point included in the scanned external environment; the sending unit is configured to send the first anchor point information to the network device at the first time, where the first anchor point information is used to determine location information of the first terminal device.
With reference to the fourth aspect, in another possible implementation manner of the fourth aspect, the sending unit is configured to send context information of the first terminal device to the network device, where the context information is used to screen out second data from the first data, and the second data includes the target anchor point set and a part of the target object set.
Optionally, the context information includes: one or more of user identification, device type, device capabilities, and cache size;
with reference to the fourth aspect, in yet another possible implementation manner of the fourth aspect, the processing unit is further configured to delete, if the context information includes a user identifier, the 3D object that the user does not have access right from the target object set, and obtain a remaining 3D object set, where the user identifier is used to indicate whether the user has access right to each 3D object; if the context information comprises a device type, a 3D object set suitable for the device type is selected from the target object set; if the context information comprises device capabilities, a 3D object set which is suitable for the device capabilities is selected from the target object set, and the device capabilities comprise the detail degree of 3D objects rendered by the device; and if the context information comprises the cache size, the storage capacity in the target object set does not exceed the cache size of the first terminal equipment.
In a fifth aspect, the present application further provides a data transmission system, where the system includes at least one terminal device and a network device, where the at least one terminal device includes a first terminal device.
Further, the first terminal device includes the data receiving apparatus in any one of the foregoing fourth aspect and fourth aspect; the network device comprises the data transmission apparatus in any one of the embodiments of the third aspect and the third aspect.
In addition, the system further comprises a second terminal device, wherein the second terminal device sends scanning information to the network device, and the scanning information comprises anchor point information of at least one anchor point scanned in the current area by the second terminal device and the 3D object marked by each anchor point; and the network equipment receives the scanning information sent by the second terminal equipment, and acquires the corresponding relation between the current area and the anchor point information of at least one anchor point of the current area according to the scanning information.
In a sixth aspect, the present application further provides a communication apparatus, including at least one processor and a memory, where the memory is configured to store instructions; the at least one processor is configured to execute the instructions to implement the methods in the foregoing first aspect and its various implementation manners, or to implement the methods in the foregoing second aspect and its various implementation manners.
Optionally, the processor and the memory may be integrated into a chip system or a chip circuit, and the chip system or the chip circuit further includes an input/output interface, where the input/output interface is used to implement communication between the chip system/chip circuit and other external modules.
Optionally, the processor is a logic circuit.
Optionally, the network device or the network node is a Cloud server, a server, or an AR Cloud.
In a seventh aspect, the present application further provides a computer-readable storage medium storing instructions which, when run on a computer or a processor, can be used to execute the methods in the foregoing first aspect and its various implementation manners, or the methods in the foregoing second aspect and its various implementation manners.
Furthermore, the present application also provides a computer program product comprising computer instructions which, when executed by a computer or a processor, may implement the method of the foregoing first aspect and various implementations of the first aspect, and the method of the foregoing second aspect and various implementations of the second aspect.
It should be noted that the beneficial effects of the technical solutions in the various implementation manners of the second to seventh aspects are the same as those of the first aspect and its various implementation manners; for details, refer to the descriptions of the beneficial effects in the first aspect and its various implementation manners, which are not repeated here.
Drawings
Fig. 1 is a schematic structural diagram of a wireless communication system provided in the present application;
fig. 2 is a schematic structural diagram of a terminal device provided in the present application;
FIG. 3 is a schematic diagram of Google's process for obtaining object position information and sharing virtual objects, as described in the present application;
fig. 4 is a flowchart of a data transmission method provided in the present application;
FIG. 5 is a schematic diagram of an indoor venue, illustrating prediction of the target area a user will move to at the next moment;
fig. 6 is a flowchart of another data transmission method provided in the present application;
FIG. 7 is a schematic view of a multi-purpose casino provided herein;
fig. 8 is a signaling flow chart of a data transmission method provided in the present application;
fig. 9 is a schematic structural diagram of a data transmission apparatus provided in the present application;
fig. 10 is a schematic structural diagram of a network device provided in the present application.
Detailed Description
In order to make the technical solutions in the embodiments of the present application better understood by those skilled in the art, the technical solutions in the embodiments of the present application are described in detail below with reference to the accompanying drawings.
Before describing the technical solutions of the embodiments of the present application, application scenarios of the embodiments are first described with reference to the drawings. The technical solutions of the present application may be applied to various communication systems, such as a wireless local area network (WLAN), a global system for mobile communications (GSM) system, a code division multiple access (CDMA) system, a wideband code division multiple access (WCDMA) system, a general packet radio service (GPRS) system, a long term evolution (LTE) system, an LTE frequency division duplex (FDD) system, an LTE time division duplex (TDD) system, a universal mobile telecommunications system (UMTS), worldwide interoperability for microwave access (WiMAX), a future 5th-generation (5G) mobile communication system or new radio (NR) system, and so on. As shown in fig. 1, any of the above communication systems includes: a terminal device carried by at least one user, a cloud network device, at least one anchor point, and the object and environment located by the anchor point.
The environment includes public viewing scenes such as exhibitions, museums, exhibition halls, and venues, and may also include Internet of Vehicles scenarios, such as vehicle-to-everything (V2X), automatic driving, and unmanned driving scenes, for example parking lots, shopping malls, and the like.
The terminal device may be a portable device, such as a smart terminal, a mobile phone, a notebook computer, a tablet computer, a personal computer (PC), a personal digital assistant (PDA), a foldable terminal, a wearable device with a wireless communication function (e.g., a smart watch or bracelet), a user equipment (UE), a smart home device (e.g., a television), an automobile, a motorcycle helmet, a vehicle-mounted computer, a game console, or an augmented reality (AR)/virtual reality (VR) device; the embodiments of the present application do not limit the specific device form of the terminal device. In addition, the terminal devices may run, but are not limited to, Apple (iOS), Android, Microsoft, or other operating systems.
Fig. 2 is a schematic diagram of a hardware structure of a terminal device according to an embodiment of the present application. As shown in fig. 2, the terminal device 100 may include a processor 110, a memory 120, a Universal Serial Bus (USB) interface 130, a radio frequency circuit 140, a mobile communication module 150, a wireless communication module 160, a camera 170, a display screen 180, a SIM card interface 190, a touch sensor 200, a pressure sensor 210, keys 220, and the like.
Among other things, processor 110 may include one or more processing units, such as: the processor 110 may include an Application Processor (AP), a modem processor, a Graphics Processing Unit (GPU), an Image Signal Processor (ISP), a video codec, a Digital Signal Processor (DSP), a baseband processor, and/or a neural Network Processor (NPU), among others. The different processing units may be independent devices, or may be integrated into one or more processors, for example, a system on a chip (SoC). A memory may also be provided in processor 110 for storing instructions and data. In some embodiments, the memory in the processor 110 is a cache memory. The memory may hold instructions or data that have just been used or recycled by the processor 110.
In some embodiments, processor 110 may include one or more interfaces. The interface may include an integrated circuit (I2C) interface, an integrated circuit built-in audio (I2S) interface, a Pulse Code Modulation (PCM) interface, a universal asynchronous receiver/transmitter (UART) interface, a Mobile Industry Processor Interface (MIPI), a general-purpose input/output (GPIO) interface, a Subscriber Identity Module (SIM) interface 190 and/or a USB interface 130, and the like.
Memory 120 may be used to store computer-executable program code, which includes instructions. The memory 120 may include a program storage area and a data storage area. The storage program area may store an operating system, an application program required by at least one function (such as a sound playing function, an image playing function, and the like), and the like. The storage data area may store data (e.g., audio data, a phonebook) created during use of the terminal device 100, and the like. Furthermore, the memory 120 may include one or more memory units, for example, a volatile memory (volatile memory), such as a Random Access Memory (RAM), and a non-volatile memory (NVM), such as a read-only memory (ROM), a flash memory (flash memory), and the like. The processor 110 executes various functional applications of the terminal device 100 and data processing by executing instructions stored in the memory 120 and/or instructions stored in a memory provided in the processor.
The wireless communication function of the terminal device 100 may be implemented by the radio frequency circuit 140, the mobile communication module 150, the wireless communication module 160, a modem processor, a baseband processor, and the like.
The radio frequency circuit 140 may include at least one antenna 141 for transmitting and receiving electromagnetic wave signals. Each antenna in terminal device 100 may be used to cover a single or multiple communication bands. In some embodiments, the antenna may be used in conjunction with a tuning switch.
The mobile communication module 150 may provide a solution including wireless communication of 2G/3G/4G/5G, etc. applied to the terminal device 100. The mobile communication module 150 may include at least one filter, a switch, a power amplifier, a Low Noise Amplifier (LNA), and the like. The mobile communication module 150 may receive electromagnetic waves from the antenna 141, filter, amplify, etc. the received electromagnetic waves, and transmit the electromagnetic waves to the modem processor for demodulation. The mobile communication module 150 may also amplify the signal modulated by the modem processor, and convert the signal into electromagnetic wave through the antenna 141 to radiate the electromagnetic wave. In some embodiments, at least some of the functional modules of the mobile communication module 150 may be disposed in the processor 110. In some embodiments, at least some of the functional modules of the mobile communication module 150 may be disposed in the same device as at least some of the modules of the processor 110.
The modem processor may include a modulator and a demodulator. The modulator is used for modulating a low-frequency baseband signal to be transmitted into a medium-high frequency signal. The demodulator is used for demodulating the received electromagnetic wave signal into a low-frequency baseband signal. The demodulator then passes the demodulated low frequency baseband signal to a baseband processor for processing. The low frequency baseband signal is processed by the baseband processor and then passed to the application processor. The application processor outputs sound signals through audio devices (including but not limited to speakers, headphones, etc.) or displays images or video through the display screen 180. In some embodiments, the modem processor may be a stand-alone device. In other embodiments, the modem processor may be provided in the same device as the mobile communication module 150 or other functional modules, independent of the processor 110.
The wireless communication module 160 may include a wireless fidelity (WiFi) module, a Bluetooth (BT) module, a GNSS module, a Near Field Communication (NFC) module, an Infrared (IR) module, and the like. The wireless communication module 160 may be one or more devices integrating at least one of the modules described above. The wireless communication module 160 receives electromagnetic waves via the antenna 141, performs frequency modulation and filtering processing on electromagnetic wave signals, and transmits the processed signals to the processor 110. The wireless communication module 160 can also receive a signal to be transmitted from the processor 110, frequency modulate it, amplify it, and convert it into electromagnetic waves via the antenna 141 to radiate it.
In the embodiment of the present application, the wireless communication functions of the terminal device 100 may include, for example, global system for mobile communications (GSM), general packet radio service (GPRS), code division multiple access (CDMA), wideband code division multiple access (WCDMA), time-division code division multiple access (TD-SCDMA), long term evolution (LTE), new radio (5G NR), GNSS, WLAN, FM, BT, NFC, and/or IR. GNSS may include the global positioning system (GPS), the global navigation satellite system (GLONASS), the BeiDou navigation satellite system (BDS), the quasi-zenith satellite system (QZSS), and/or a satellite based augmentation system (SBAS).
The camera 170 is used to capture still images or video. The camera 170 includes a lens and a photosensitive element, and an object generates an optical image through the lens and is projected onto the photosensitive element. The photosensitive element may be a Charge Coupled Device (CCD) or a complementary metal-oxide-semiconductor (CMOS) phototransistor. The light sensing element converts the optical signal into an electrical signal, which is then passed to the ISP where it is converted into a digital image signal. And the ISP outputs the digital image signal to the DSP for processing. The DSP converts the digital image signals into image signals in standard RGB, YUV, RYYB and other formats. In some embodiments, the terminal device 100 may include 1 or N cameras 170, N being a positive integer greater than 1.
The NPU is a neural-network (NN) computing processor that rapidly processes input information by drawing on the structure of biological neural networks, for example the transfer mode between neurons of the human brain, and can also continuously self-learn. The NPU can implement applications such as intelligent recognition on the terminal device 100, for example image recognition, face recognition, voice recognition, and the like.
The display screen 180 is used to display images, videos, and the like. The display screen 180 includes a display panel. The display panel may be a liquid crystal display (LCD), an organic light-emitting diode (OLED), an active-matrix organic light-emitting diode (AMOLED), a flexible light-emitting diode (FLED), a MiniLED, a MicroLED, a Micro-OLED, a quantum dot light-emitting diode (QLED), or the like. In some embodiments, the terminal device 100 may include 1 or N display screens 180, N being a positive integer greater than 1.
The touch sensor 200 is also referred to as a "touch device". The touch sensor 200 may be disposed on the display screen 180; together, the touch sensor 200 and the display screen 180 form what is called a "touch screen". The touch sensor 200 is used to detect a touch operation applied on or near it, and can pass the detected touch operation to the application processor to determine the touch event type. Visual output associated with the touch operation may be provided through the display screen 180. In other embodiments, the touch sensor 200 may be disposed on the surface of the terminal device 100 at a position different from that of the display screen 180. The pressure sensor 210 is used to measure the pressure of the user's touch on the screen. In addition, other sensors may also be included, such as a gyroscope sensor, an acceleration sensor, a temperature sensor, and the like.
The keys 220 include a power key, volume keys, and the like. The keys 220 may be mechanical keys or touch keys. The terminal device 100 may receive key inputs and generate key signal inputs related to user settings and function control of the terminal device 100.
It should be understood that the exemplary structure of the embodiments of the present application does not constitute a specific limitation to the terminal device. In other embodiments of the present application, a terminal device may include more or fewer components than shown, or some components may be combined, some components may be split, or a different arrangement of components may be used. The illustrated components may be implemented in hardware, software, or a combination of software and hardware.
In addition, the network device includes, but is not limited to, a cloud server, a data center, a computing unit, an AR Cloud, and the like. The structure of the network device may be the same as or different from the structure of the terminal device 100 shown in fig. 2; this embodiment does not particularly limit the structure and form of the network device.
Embodiments of cloud anchor point scanning and virtual object sharing are described below.
Google has proposed a method for acquiring object position information and implementing virtual object sharing. Google's ARCore is an AR toolkit for mobile application developers and users, and includes a cloud anchor (Cloud Anchor) function, which allows developers to upload anchor information to the cloud and to match stored anchors from the cloud through feature information, thereby achieving sharing of position information and virtual objects. Referring to fig. 3, the specific method includes:
Step one: user A recognizes the environment feature information through the mobile phone and generates anchor point information, for example including at least one anchor point in the currently scanned environment, the position coordinates of each anchor point, and the like.
Step two: user A uploads the anchor point information to the cloud using the mobile phone.
Step three: user A places a virtual object, such as a 3D object, based on an anchor point, and uploads the anchor-based 3D object to the cloud.
Step four: in the same environment, user B uses another mobile phone to scan and identify the environment to obtain environment feature information.
Step five: user B uploads the scanned environment feature information to the cloud using the mobile phone; the cloud matches the cloud anchor according to the feature information and issues the anchor and the 3D object to user B's mobile phone.
Step six: after user B's mobile phone receives the information, it displays the 3D object created by user A according to the anchor point, realizing sharing of the 3D object.
In the above method, when user B wants to share the 3D object, he must start by scanning the current environment (step four) and then go through a series of processes: identifying environment feature information, uploading the feature information, successfully matching the anchor at the cloud, issuing the anchor and the 3D object to user B's mobile phone, and so on. This often requires a long wait, for example more than 5 seconds, and the experience of user B is poor.
In order to reduce the waiting time of the user and improve the user experience, the present embodiment provides a method for sending anchor point information, as shown in fig. 4, where the method is performed by a network device, such as a Cloud server or an AR Cloud, and specifically, the method includes:
101: the network equipment acquires the position information and the movement attribute information of the first terminal equipment at a first moment.
The position information of the first terminal device is the coordinate position of the first terminal device at the current first moment. The movement attribute information represents the movement capability of the first terminal device and includes a movement speed and a movement direction. Specifically, the movement speed depends on the movement state of the user when using the device, such as walking, riding, or driving. The moving speed and direction may be configured by the network device according to the current state of the user. For example, if the user holds the first terminal device while walking around an exhibit in a museum, the moving speed of the first terminal device can be determined according to the average indoor walking speed, and the moving direction can be measured by the gyroscope sensor of the first terminal device.
Optionally, in one implementation, the first moment may be when the network device detects that the first terminal device carried by the user enters the exhibition area, or when the network device detects that the first terminal device enters the coverage of a first anchor point in the first area; at that moment, the obtaining operation in step 101 is executed.
Alternatively, in another possible implementation, the user starts the scanning function of the first terminal device, for example opens the Cloud Anchor APP, and begins scanning the currently visited exhibit/article; the first terminal device may then automatically send an instruction to the network device in the cloud. The first moment is when the network device receives this instruction from the first terminal device, indicating that the user has started the scanning function to perform the exhibit/article scanning operation, at which point step 101 above is performed.
It should be understood that the first moment may also be another time node; that is, the first moment may be any time before the user moves to the target area or scans the environment of the target area.
In addition, in step 101, the network device may communicate with a first terminal device through any one or more of GPS, WiFi, bluetooth, and the like, to obtain location information and movement attribute information of the first terminal device. In a possible implementation manner, a first terminal device captures environmental characteristic information by using a camera, and then uploads the environmental characteristic information to a network device, and the network device determines the position information of the first terminal device through the environmental characteristic information after receiving the environmental characteristic information. The location information of the first terminal device may be an outdoor location or an indoor location.
102: and the network equipment determines a target area according to the position information and the movement attribute information of the first terminal equipment, wherein the target area is an area which is predicted to be reached by the first terminal equipment at a second moment, and the second moment is the next moment of the first moment.
For example, a network device, such as a cloud server, divides a space into a plurality of regions in advance, each region including one or more anchor points. As shown in fig. 5, the indoor scene of an exhibition hall is divided into 6 areas, namely a first area to a sixth area, each area displays at least one exhibit, and the position and the size of each exhibit can be calibrated by at least one anchor point. Such as the object 1 being located in a first area, the size and position of the object 1 being represented by at least one anchor point (not shown in fig. 5).
The target area may be a preset position. For example, when the user browses a museum, browsing may proceed according to the museum's route, and the target area may be determined according to the museum's route plan. When there is no planned route, there may be one or more target areas, according to at least one path from the user's current location to the final location.
Further, the network device determines an area to which the first user may move at the second time according to the position, the moving speed, and the moving direction of the first terminal device, such as a mobile phone, carried by the user, and sets the area as the target area.
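The prediction in steps 101-102 can be sketched as simple dead reckoning: the network device extrapolates the device position over one time step and looks up which pre-divided area contains the extrapolated point. The patent does not specify a concrete algorithm; the names, rectangular area geometry, and walking speed below are illustrative assumptions.

```python
import math
from dataclasses import dataclass

# Illustrative indoor walking speed (m/s) the network device might configure
# for a user strolling through an exhibition hall (an assumption).
INDOOR_WALK_SPEED = 1.2

@dataclass
class Area:
    """A pre-divided rectangular region of the venue (assumed geometry)."""
    name: str
    x_min: float
    x_max: float
    y_min: float
    y_max: float

    def contains(self, x: float, y: float) -> bool:
        return self.x_min <= x <= self.x_max and self.y_min <= y <= self.y_max

def predict_target_area(x, y, speed, heading_rad, dt, areas):
    """Extrapolate the device position to the second moment (dt seconds
    later) and return the area expected to contain it, or None."""
    nx = x + speed * math.cos(heading_rad) * dt
    ny = y + speed * math.sin(heading_rad) * dt
    for area in areas:
        if area.contains(nx, ny):
            return area
    return None

areas = [Area("first area", 0, 5, 0, 5), Area("second area", 5, 10, 0, 5)]
# User at (4, 2) walking along +x: after 2 s the device is at (6.4, 2.0),
# which falls in the second area.
target = predict_target_area(4.0, 2.0, INDOOR_WALK_SPEED, 0.0, 2.0, areas)
```

A production system would use richer motion models and indoor-positioning input, but the area lookup shape stays the same.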
103: and acquiring first data corresponding to the target area, wherein the first data comprises a target anchor point set and a target object set.
Wherein the target area includes at least one anchor point, the target anchor point set is composed of the at least one anchor point, the at least one anchor point is used for marking at least one 3D object of the target area, and the at least one 3D object constitutes the target object set.
In addition, before step 103, the method further includes: the network device acquires the target anchor point set and the target object set corresponding to the target area. One embodiment includes: the network device acquires a correspondence between at least one area and at least one anchor point set, where the at least one area includes the target area and each anchor point set includes at least one anchor point.
In step 102, after obtaining the target area, the network device searches a target anchor point set associated with the target area in the corresponding relationship; and determining the target object set according to the 3D object marked by each anchor point in the target anchor point set.
For example, the target area is a second area, and the second area includes two anchor points, that is, anchor point 1 and anchor point 2, where anchor point 1 is used to mark object 21, and anchor point 2 is used to mark object 22, and then the target anchor point set is determined to be { anchor point 1, anchor point 2}, and the target object set is determined to be { object 21, object 22 }.
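The lookup described above reduces to two table accesses: area → anchor point set, then anchor point → 3D object. A minimal sketch of step 103 under assumed table names, using the anchor 1/anchor 2 example:

```python
# Hypothetical correspondence tables maintained by the network device
# (the patent only requires that such a correspondence exists).
area_to_anchors = {
    "second area": ["anchor 1", "anchor 2"],
}
anchor_to_object = {
    "anchor 1": "object 21",
    "anchor 2": "object 22",
}

def first_data_for(target_area):
    """Return (target anchor point set, target object set) for an area:
    look up the anchors of the area, then the 3D object each anchor marks."""
    anchors = area_to_anchors.get(target_area, [])
    objects = [anchor_to_object[a] for a in anchors if a in anchor_to_object]
    return anchors, objects

anchors, objects = first_data_for("second area")
# anchors -> ["anchor 1", "anchor 2"]; objects -> ["object 21", "object 22"]
```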
If the first terminal device is a mobile phone, the user holds the mobile phone to walk indoors, the cloud server can determine a target area to which the user is about to enter according to the current position, moving speed and moving direction of the user, and then the cloud server screens out related information of at least one anchor point and a 3D object existing in the target area to obtain the target anchor point set and the target object set. If the first terminal device is a vehicle, the cloud server determines that the vehicle is about to enter a target street according to the current position, the moving direction and the moving speed of the vehicle, and then screens out a target anchor point set and a target object set corresponding to the target street in advance.
In addition, the objects in the target object set are 3D objects, and each 3D object in the target object set contains all information forming the 3D object, such as information and data of 3D images of the object 21 and the object 22.
104: and the network equipment sends the first data to the first terminal equipment. Correspondingly, the first terminal equipment receives the first data.
105: and the first terminal equipment loads the first data and displays the target object set on the first terminal equipment.
Specifically, the first terminal device scans the user's current surroundings in real time; the scanned surroundings include one or more anchor points. The first terminal device matches the scanned anchor points with the pre-stored anchor points and determines, according to the matching result, whether the user has entered the target area. If a match succeeds, it is determined that the first terminal device is located in the target area. The anchor points scanned by the first terminal device are matched in real time; the matched anchor points are determined as target anchor points, and a plurality of target anchor points constitute the target anchor point set. Because the first data includes the correspondence between the target anchor point set and the target object set, the target object set can be determined and displayed on the display screen.
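The real-time matching above can be sketched as a set intersection between the anchors the first terminal device scans and the target anchor point set received in the first data, with any non-empty intersection treated as entry into the target area. This is an illustrative simplification, not the patent's exact matching procedure.

```python
def match_scanned_anchors(scanned_ids, target_anchor_set):
    """Return the subset of currently scanned anchors that belong to the
    pre-downloaded target anchor point set."""
    return set(scanned_ids) & set(target_anchor_set)

def in_target_area(scanned_ids, target_anchor_set):
    """The device concludes it has entered the target area once at least
    one scanned anchor matches a pre-downloaded target anchor."""
    return bool(match_scanned_anchors(scanned_ids, target_anchor_set))

# The device scans "anchor 2" and an unrelated "anchor 9"; only
# "anchor 2" matches the pre-downloaded set, so the area is entered.
matched = match_scanned_anchors(["anchor 2", "anchor 9"],
                                ["anchor 1", "anchor 2"])
```

Because the objects for these anchors were delivered in advance, display can begin immediately on a match, with no round trip to the cloud.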
In one example, when the user moves to the target area, a 3D image of the target object is automatically displayed on the display screen of the user's mobile phone. For example, if the object 21 is a ceramic cup, a 3D stereoscopic image of the ceramic cup, or graphic and text commentary about it, is displayed on the display screen. 3D image information of the object 22 can also be displayed; for example, if the object 22 is a portrait, the 3D image information of the portrait, such as a story introduction and voice narration, can be displayed on the display screen. This embodiment does not limit the display mode of the target 3D object on the display screen of the first terminal device.
According to the method provided by this embodiment, the cloud server uses the position and movement attributes of the terminal device to predict in advance the target area the user will reach, and then issues the anchor point information and 3D objects corresponding to the target area to the terminal device in advance, so that the terminal device pre-loads the 3D object information to be displayed. This avoids the situation where the user only starts scanning the current environment upon arriving at the target area and must then go through the series of operations of identifying environment feature information, uploading the feature information, matching the anchor at the cloud, loading, and so on. The time the user waits for the 3D object to load is saved, and the user experience is improved.
Further, in the above embodiment, before step 102, the method further includes: and the first terminal equipment sends the context information of the first terminal equipment to the network equipment. The context information of the first terminal device includes at least one of a user identifier, a device type, a device capability, and a cache size.
The user identification uniquely identifies the identity of the user using the terminal device. The cloud server or AR Cloud determines, according to the user identification, whether the current user has access rights to different 3D objects. For example, the user identification can mark whether the user's identity is "ordinary user" or "VIP user", or mark which age group the user belongs to, such as "teenager", "middle-aged", or "elderly", because different 3D object image information can be matched to different user identities.
The device type indicates which device the user is using, including but not limited to a mobile phone, Pad, PC, AR/VR glasses, car, motorcycle helmet, and the like. Since different 3D objects show different effects on different user terminals, some 3D objects are specifically customized for a certain device type, e.g., 3D objects customized for AR glasses.
The device capability represents the rendering capability of the terminal device for 3D objects. The rendering capability depends on the processing capability determined by the terminal device's GPU, CPU, memory, and so on. The higher the device capability, the richer the level of detail (abbreviated "detail") of the objects that can be rendered. Conversely, for some 3D objects with rich detail and large volume, attempting to render them on a terminal device with weak rendering capability may overheat the device and make loading take too long, so the capability of the terminal device needs to be distinguished and marked.
The detail degree can be represented by the resolution of the target 3D object image to be displayed: the higher the resolution, the clearer the image and the richer the detail; conversely, the lower the resolution, the coarser the image detail. In addition, when configuring 3D object information by software, the 3D image to be displayed should preserve detail as faithfully as possible, while some areas may not require faithful storage of such detail. This example therefore provides 3D images of different degrees of detail through a software module, enabling the resolution (pixels per inch) of the compressed image to vary over the 3D object image area, with the desired resolution (degree of detail) for each 3D image determined automatically by software and/or under user control prior to compression. Based on this, the 3D object information in this embodiment may be stored at a lower resolution, or less important image areas may be stored at a lower resolution; alternatively, sharp 3D object images can be stored at high resolution.
The cache size indicates the cache capacity of the terminal device, that is, the size of its storage space. The larger the cache capacity, the more 3D object data/information the terminal device can store locally; conversely, the smaller the cache capacity, the less 3D data/information can be stored. Different target 3D object sets are therefore determined according to the cache size of the terminal device.
In this embodiment, the first terminal device may report its context information in multiple reports or all at once; the specific reporting mode may be a wireless communication mode such as WiFi or Bluetooth.
Referring to fig. 6, after step 103 above, in which the network device obtains the first data corresponding to the target area, the method further includes:
104': and the network equipment screens out second data from the first data according to the context information of the first terminal equipment, wherein the second data comprises the target anchor point set and a part of the target object set.
Example 1:
According to the user identifier in the context information, the network device deletes from the target object set the 3D objects to which the user has no access right, obtaining the remaining 3D object set. For example, if the user identifier indicates that the user is an "ordinary user", the exhibition rights of some 3D objects are not open to ordinary users but only to VIP users; the information of these non-open 3D objects is masked, obtaining a 3D object set open only to "ordinary users", that is, the data of the 3D objects that only VIP users can access is removed from the target object set.
Example 2:
According to the device type in the context information, the network device screens out from the target object set the 3D objects suitable for that device type, and deletes the 3D objects not suitable for the first terminal device. For example, if the device type of the first terminal device is AR glasses, only the 3D object set that can be displayed on AR glasses is reserved, and the data of 3D objects for other types of terminal devices is removed from the target object set.
Example 3:
According to the device capability in the context information, the network device screens out from the target object set the 3D objects suitable for that capability, where the device capability includes the degree of detail the device can render. For example, if the context information indicates that the rendering capability of the first terminal device is "high", the 3D objects with richer detail are selected as the target 3D object set, and 3D objects with blurred image features or without rich detail and rendering effects are masked; alternatively, 3D objects that cannot be presented and rendered on the first terminal device are deleted from the target object set, so as to provide the user with high-definition, high-detail rendering effects.
Example 4:
According to the cache size in the context information, the network device screens out from the target object set a 3D object set whose storage does not exceed the cache size of the first terminal device. If the network device determines from the cache size that the remaining cache space in the first terminal device is small and cannot hold the entire target object set and target anchor point set of the target area, the data/information of at least one 3D object far from the first terminal device is removed and the data of the 3D objects close to the first terminal device is retained, such that the storage occupied by the retained data is smaller than the cache space of the first terminal device.
In addition, the network device also determines the set of anchor points with which these 3D objects are associated. If the 3D object set determined in step 103 is large, a part of it needs to be selected according to the cache size of the user terminal. The cloud server may select the first several 3D objects in the order in which the first terminal device is expected to encounter their anchor points, requiring that the storage they occupy is less than the cache size.
105': and the network equipment sends the second data to the first terminal equipment.
According to the method provided by this embodiment, the network device further screens the target object set according to the context information of the first terminal device held by the user, eliminating 3D objects that do not match the user identifier, device type, device capability, or cache size, so that the target object set remaining after screening better suits the first terminal device and the user experience is improved.
In addition, transmitting to the first terminal device the second data obtained by screening the first data, rather than the complete first data, reduces the amount of 3D object data sent to the terminal device and thus the transmission overhead.
It should be noted that, in step 104', any combination of two or more of examples 1 to 4 above may be used in determining the second data. For example, if the context information reported by the first terminal device to the network device includes all of the user identifier, device type, device capability, and cache size, the screening checks each piece of information in turn against the conditions of the terminal device or user, and then takes the intersection of all the screening results to obtain the second data.
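The intersection rule of step 104' can be illustrated with plain sets. The per-criterion result sets below are hypothetical examples, not values stated in this passage.

```python
# Sketch of the combination rule in step 104': screen the target object set
# once per context field, then intersect the per-field results to obtain
# the objects of the second data. Set contents are illustrative.

by_user       = {"dolphin doll", "teddy bear", "ninja model", "doll"}
by_device     = {"dolphin doll", "teddy bear", "ninja model", "galloping car model"}
by_capability = {"dolphin doll", "teddy bear", "doll", "galloping car model"}
by_cache      = {"dolphin doll", "teddy bear"}

# Only objects passing every screening survive.
second_data_objects = by_user & by_device & by_capability & by_cache
print(sorted(second_data_objects))  # ['dolphin doll', 'teddy bear']
```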
It should be understood that the context information of the terminal device may further include other parameter characteristics and/or screening conditions; this embodiment does not limit the specific process of screening a portion of the first data as the second data.
In one embodiment, shown in FIG. 7, a multi-purpose entertainment venue includes three rooms: a living room, a toy room, and a rest room. Several tables are arranged in the toy room, with different toys on each table. An activity organizer has previously created a number of 3D toys in the toy room, and the position and size of each 3D toy can be marked by an anchor point.
Referring to fig. 8, a data transmission method provided in this embodiment includes the following steps:
301: the organizer terminal device scans the current area, identifies at least one anchor point in the current area, creates 3D object information according to the at least one anchor point, and configures the 3D object information on the associated at least one anchor point.
Specifically, an organizer such as UE2 first identifies 5 anchor points on the tables in the toy room through its own cell phone, labeled anchor point 2, anchor point 3, anchor point 4, anchor point 5, and anchor point 6. Anchor points 1 and 7 are anchor points provided in the living room. Anchor points 2 to 6 mark the edges, corners, decorations, etc. of the 5 tables and are unique within the toy room. Based on the scanned anchor points, UE2 sets 3D toy information, i.e. 3D toy virtual images, according to the position and posture of each anchor point; for example, the 3D toy virtual images marked by anchor points 2 to 6 are a galloping car model, a dolphin doll, a ninja model, a doll, and a teddy bear, yielding the correspondence shown in Table 1.
TABLE 1 correspondence between anchor points and 3D objects
Anchor point      3D object information
anchor point 2    galloping car model
anchor point 3    dolphin doll
anchor point 4    ninja model
anchor point 5    doll
anchor point 6    teddy bear
As shown in fig. 7, the 3D dolphin doll is set around anchor point 3, whose position and posture in the toy room are marked by anchor point 3; similarly, the doll is set around anchor point 5, so its position and posture are marked by anchor point 5. The 3D object information includes the position, posture, size, and other relevant data of the dolphin doll and the doll in the toy room. Correspondences are then established between anchor point 3 and the 3D object information of the dolphin doll, and between anchor point 5 and the 3D object information of the doll.
302: the UE2 sends the at least one anchor point information and the 3D object information to a cloud server. The anchor point information includes: the positions of all anchor points in the toy room and an anchor point coordinate system established by each anchor point; the 3D object information includes a correspondence between the at least one anchor point and the configured 3D object information.
Optionally, the method further includes: UE2 sets screening conditions for these 3D objects, for example, setting the access right of the galloping car model so that only users identified as "member" or "VIP user" may view it; setting the ninja model to be visible only to devices whose capability is "high"; and setting the doll to be viewable only through terminal devices of type "AR glasses". A customer entering the multi-purpose entertainment venue can view the 3D objects placed by the organizer UE2 in the toy room using the customer's own terminal device, and the devices used by different customers may be the same or different.
Correspondingly, the cloud server receives the anchor point information and the 3D object information sent by UE2, and establishes the correspondence between the at least one anchor point and the 3D object information according to the received information, as shown in Table 1 above.
In addition, according to the pre-divided regions and the anchor points of each region, the cloud server further establishes a correspondence between at least one region and an anchor point set, and thereby obtains correspondences between different anchor point sets and 3D object information. For example, the multi-purpose entertainment venue is divided into 3 regions by room, namely a first region, a second region, and a third region, corresponding in order to the living room, the toy room, and the rest room. Each region includes at least one anchor point, and a correspondence between each anchor point set and 3D object information is established, as shown in Table 2: Map<anchor point set, 3D object information>, where the 3D object information may represent all information of the 3D objects.
TABLE 2 correspondence between anchor set and 3D object information
Anchor point set                                    3D object information
anchor point set 2 {anchor points 2, 3, 4, 5, 6}    galloping car model, dolphin doll, ninja model, doll, teddy bear
It should be understood that one piece of 3D object information may be marked by one anchor point, or by two or more anchor points. In addition, one anchor point may mark one or more 3D objects; this embodiment does not limit the number of anchor points marking each 3D object, nor the number of 3D objects that each anchor point can mark.
Optionally, each region may be further divided into smaller regions; for example, taking the range that each anchor point can mark as one region, "anchor point set 2" in Table 2 may be split into 5 anchor point sets, each including one anchor point. This embodiment does not limit the manner of dividing the regions, nor the number and positions of the anchor points configured in each region.
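The correspondences of Tables 1 and 2 can be sketched as two lookup maps. The dictionary layout and the `objects_in_region` helper are assumptions made for illustration; only the anchor and object names come from the example.

```python
# Illustrative sketch of the two correspondences the cloud server keeps:
# region -> anchor point set, and anchor point -> 3D object information.

region_to_anchors = {
    "living room": frozenset({"anchor 1", "anchor 7"}),
    "toy room": frozenset({"anchor 2", "anchor 3", "anchor 4",
                           "anchor 5", "anchor 6"}),
}

anchor_to_object = {  # Table 1: one 3D object configured per anchor
    "anchor 2": "galloping car model",
    "anchor 3": "dolphin doll",
    "anchor 4": "ninja model",
    "anchor 5": "doll",
    "anchor 6": "teddy bear",
}

def objects_in_region(region):
    """Resolve Map<anchor point set, 3D object information> for one region."""
    anchors = region_to_anchors[region]
    return {a: anchor_to_object[a] for a in anchors if a in anchor_to_object}

print(sorted(objects_in_region("toy room").values()))
# ['doll', 'dolphin doll', 'galloping car model', 'ninja model', 'teddy bear']
```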
303: the patron enters the multi-function casino and the handheld first terminal device UE1 scans and identifies the first set of anchor points. The first anchor point set comprises the characteristic information of the anchor point 1.
For example, UE1 is a cell phone; the customer enters the venue and looks around, and when walking toward the toy room, UE1 scans and identifies anchor point 1 at the doorway of the toy room.
304: the UE1 sends the first set of anchor points to the cloud server, while the UE1 also uploads the mobility attribute information, as well as context information such as user identification, device type, device capabilities, cache size, etc. In this example, it is assumed that the user identifier reported by the UE1 is a normal user, the device type is a mobile phone, the device capability is medium, the buffer size is 1G, and the mobility attribute is 0.5 m/s.
305: and the cloud server receives the first anchor point set, the mobile attribute information and the context information of the UE1 reported by the UE 1. And matching anchor points 1 in the anchor point database of the cloud server according to the characteristics of the anchor points 1 in the first anchor point set, determining the position of the UE1, and determining a target area moving at the next moment according to the movement attribute information of the UE 1.
For example, in this example the cloud server identifies that anchor point 1 is located at the toy room doorway, thereby determining that cell phone UE1 is at the toy room doorway; then, combining this with the phone's movement attribute of 0.5 m/s, it predicts that the phone will enter the toy room after 3 seconds. In this example, the target area is the toy room.
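The position-plus-speed prediction of step 305 can be sketched as dead reckoning against pre-divided regions. The rectangular room geometry, coordinates, and the `predict_target_region` helper are hypothetical; only the 0.5 m/s speed and the 3-second horizon come from the example.

```python
# Minimal sketch of step 305: extrapolate the terminal's position from its
# movement attributes, then map the predicted point to a pre-divided region.

def predict_target_region(position, speed_mps, direction, horizon_s, regions):
    """position: (x, y) in metres; direction: unit vector (dx, dy);
    regions: {name: (x_min, y_min, x_max, y_max)} bounding boxes."""
    x = position[0] + speed_mps * horizon_s * direction[0]
    y = position[1] + speed_mps * horizon_s * direction[1]
    for name, (x0, y0, x1, y1) in regions.items():
        if x0 <= x <= x1 and y0 <= y <= y1:
            return name  # predicted target area
    return None

regions = {"living room": (0, 0, 10, 10), "toy room": (10, 0, 20, 10)}
# UE1 stands near the toy room doorway (anchor 1), moving +x at 0.5 m/s.
print(predict_target_region((9.5, 5.0), 0.5, (1.0, 0.0), 3.0, regions))
# toy room
```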
306: the cloud server determines a set of target anchors and a set of 3D objects within the target area.
Specifically, the cloud server determines that the target anchor point set contained in the target area, i.e. the toy room, consists of anchor points 2, 3, 4, 5, and 6, and that the corresponding 3D object set includes the galloping car model, dolphin doll, ninja model, doll, and teddy bear.
307: The cloud server screens the target anchor point set and the 3D object set according to the context information of UE1, and determines the 3D object set that UE1 can display and the corresponding anchor point set.
Specifically, the cloud server determines from the customer's "user identifier" that the customer is a normal user, and therefore that the galloping car model in the 3D object set is not visible to the customer, because it is visible only to VIP members. According to the device type of the user terminal being a cell phone, it determines that the doll is invisible to the customer, because the doll is visible only on devices of type AR glasses. According to the "device capability" of the terminal being medium, it determines that the ninja model is invisible, because the ninja model is rich in detail and can be rendered and presented only by high-capability devices. According to the 1 GB "cache size" of UE1, the data/information of the two remaining 3D objects, the dolphin doll and the teddy bear, can be accommodated. Finally, after the galloping car model, doll, and ninja model are removed from the 3D object set, the 3D object set that UE1 can display is {dolphin doll, teddy bear}, and the corresponding anchor point set is {anchor point 3, anchor point 6}.
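The sequential screening of step 307 can be sketched as follows, reproducing the {dolphin doll, teddy bear} result of this example. The attribute catalog, field names, capability ranks, and sizes are assumptions made for illustration.

```python
# Worked sketch of step 307: apply each context filter in turn and keep
# only the 3D objects that pass all of them.

CATALOG = {  # hypothetical per-object screening conditions
    "galloping car model": {"vip_only": True,  "device": "any",        "cap": "medium", "size_gb": 0.2},
    "dolphin doll":        {"vip_only": False, "device": "any",        "cap": "medium", "size_gb": 0.3},
    "ninja model":         {"vip_only": False, "device": "any",        "cap": "high",   "size_gb": 0.3},
    "doll":                {"vip_only": False, "device": "AR glasses", "cap": "medium", "size_gb": 0.1},
    "teddy bear":          {"vip_only": False, "device": "any",        "cap": "medium", "size_gb": 0.3},
}
CAP_RANK = {"low": 0, "medium": 1, "high": 2}

def screen(ctx):
    kept, used_gb = [], 0.0
    for name, a in CATALOG.items():
        if a["vip_only"] and ctx["user"] != "VIP":
            continue  # user identifier filter
        if a["device"] not in ("any", ctx["device_type"]):
            continue  # device type filter
        if CAP_RANK[ctx["capability"]] < CAP_RANK[a["cap"]]:
            continue  # device capability filter
        if used_gb + a["size_gb"] > ctx["cache_gb"]:
            continue  # cache size filter
        kept.append(name)
        used_gb += a["size_gb"]
    return kept

ctx = {"user": "normal", "device_type": "cell phone",
       "capability": "medium", "cache_gb": 1.0}
print(screen(ctx))  # ['dolphin doll', 'teddy bear']
```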
308: the cloud server sends the set of 3D objects and their corresponding set of anchor points to the UE 1.
In this example, the 3D object set sent by the cloud server to UE1 includes the dolphin doll and the teddy bear, and the anchor point set includes anchor point 3 and anchor point 6.
In addition, the cloud server may also send all anchor point information in the toy room to UE1, for example the accompanying information of anchor points 2, 4, and 5: the 3D object corresponding to anchor point 2 is visible only to users identified as "member" or "VIP user"; the 3D object corresponding to anchor point 4 requires a device with "high" device capability; and the 3D object corresponding to anchor point 5 must be displayed on a device whose type is AR glasses.
309: after the UE1 enters the target area, the set of 3D objects is displayed on the display screen of the UE1 after identifying the anchor point.
Specifically, after the customer enters the toy room with cell phone UE1, the phone scans within the toy room. When the phone identifies anchor point 3, it is successfully matched against the locally stored anchor point 3, and the dolphin doll is displayed on the phone immediately. When the phone identifies anchor point 6, it is successfully matched against the locally stored anchor point 6, and the 3D image of the teddy bear is displayed on the phone immediately.
When the phone identifies anchor points 2, 4, or 5, corresponding prompt information is displayed, indicating that a 3D object exists there but is not shown, together with the reason why it is not shown.
According to the method provided by this embodiment, when the user scans the environment with the handheld terminal device, 3D objects load in real time, saving the waiting time of interacting with the cloud and improving the user experience.
Embodiments of the apparatus corresponding to the above-described embodiments of the method of the present application are described below.
Fig. 9 is a schematic structural diagram of an apparatus according to an embodiment of the present disclosure. In one embodiment, the apparatus may include an obtaining unit 901, a processing unit 902, and a sending unit 903; the apparatus may further include more or fewer units and modules, such as a receiving unit 904 and a storage unit 905, and the structure of the apparatus is not limited in this embodiment.
When the apparatus is a data transmission apparatus, the obtaining unit 901 is configured to obtain location information and movement attribute information of a first terminal device at a first time, where the movement attribute information includes a movement speed and a movement direction of the first terminal device. A processing unit 902, configured to determine a target area according to the location information and the mobile attribute information of the first terminal device, and acquire first data corresponding to the target area; a sending unit 903, configured to send the first data to the first terminal device.
Wherein the target area is an area which is predicted to be reached by the first terminal device at a second time, the second time is a next time of the first time, the first data includes a set of target anchors and a set of target objects, the target area includes at least one anchor, the set of target anchors is composed of the at least one anchor, the at least one anchor is used for marking at least one 3D object of the target area, and the at least one 3D object constitutes the set of target objects;
optionally, in some embodiments, the processing unit 902 is further configured to obtain a correspondence between at least one area and at least one anchor point set, and search, in the correspondence, a target anchor point set associated with the target area; and determining the target object set according to the 3D object marked by each anchor point in the target anchor point set. The at least one region includes the target region.
Optionally, in other embodiments, after acquiring the first data corresponding to the target area, the processing unit 902 is further configured to screen out second data from the first data according to the context information of the first terminal device, where the second data includes the target anchor point set and a part of the target object set. The context information includes: one or more of a user identification, a device type, a device capability, and a cache size. The sending unit 903 is further configured to send the second data to the first terminal device.
Optionally, in other embodiments, the processing unit 902 is further configured to delete, from the target object set according to a user identifier in the context information, the 3D objects to which the user does not have access rights, to obtain a remaining 3D object set, where the user identifier is used to indicate whether the user has access rights to each 3D object;
or screening a 3D object set suitable for the equipment type from the target object set according to the equipment type in the context information;
or screening a 3D object set suitable for the equipment capability from the target object set according to the equipment capability in the context information, wherein the equipment capability comprises the detail degree of the equipment for rendering the 3D object;
or screening out a 3D object set with the storage capacity not exceeding the cache size of the first terminal device from the target object set according to the cache size in the context information.
Optionally, in other embodiments, the receiving unit 904 is further configured to receive first anchor point information sent by the first terminal device at the first time; the processing unit 902 is further configured to search whether the first anchor point information is stored in an anchor point database, and if so, determine the location information of the first terminal device according to the first anchor point information, where the anchor point database includes anchor point information of at least one anchor point.
In addition, if the processing unit 902 does not find the first anchor point information, the matching fails, that is, the location information of the first terminal device cannot be determined. At this point, the user is required to rescan.
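The anchor-database lookup and rescan fallback can be sketched as a simple keyed lookup. The database layout and the `locate` helper are hypothetical illustrations of the behavior described above.

```python
# Hedged sketch of the processing unit's lookup: match the reported anchor
# information against the anchor database; a miss means the terminal's
# location cannot be determined and the user must rescan.

ANCHOR_DB = {
    "anchor 1": {"location": "toy room doorway"},
    "anchor 7": {"location": "living room"},
}

def locate(first_anchor_info):
    """Return the terminal's location, or None when matching fails."""
    entry = ANCHOR_DB.get(first_anchor_info)
    return entry["location"] if entry else None

print(locate("anchor 1"))   # toy room doorway
print(locate("anchor 99"))  # None -> prompt the user to rescan
```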
Optionally, in still other embodiments, the receiving unit 904 is further configured to receive scanning information sent by the second terminal device, where the scanning information includes anchor point information of at least one anchor point scanned in the current area by the second terminal device and the 3D object marked by each anchor point. The processing unit 902 is further configured to obtain, according to the scanning information, a corresponding relationship between the current area and anchor point information of at least one anchor point of the current area.
A storage unit 905, configured to store a corresponding relationship between the current area and anchor point information of at least one anchor point of the current area.
On the other hand, when the apparatus is a data receiving apparatus, the receiving unit 904 is configured to receive first data sent by a network device; the processing unit 902 is configured to display the set of target objects on the first terminal device when the first terminal device enters the target area and scans a set of target anchor points in the first data.
The first data includes a target anchor point set and a target object set; the target anchor point set is composed of at least one anchor point in a target area, the at least one anchor point is used to mark at least one 3D object of the target area, the at least one 3D object constitutes the target object set, and the target area is an area that the first terminal device is predicted to reach at a second time.
Optionally, in some embodiments, the processing unit 902 is further configured to scan an external environment to obtain first anchor point information before the receiving unit receives the first data, where the first anchor point information includes anchor point information of at least one anchor point included in the scanned external environment; the sending unit 903 is configured to send the first anchor point information to the network device at the first time, where the first anchor point information is used to determine the location information of the first terminal device.
Optionally, in other embodiments, the sending unit 903 is configured to send context information of the first terminal device to the network device, where the context information is used to screen out second data from the first data, and the second data includes the target anchor point set and a part of the target object set.
Further, the context information includes: one or more of a user identification, a device type, a device capability, and a cache size. The processing unit 902 is further configured to:
If the context information includes a user identifier, the 3D objects to which the user does not have access rights are deleted from the target object set to obtain a remaining 3D object set, where the user identifier is used to indicate whether the user has access rights to each 3D object.
If the context information includes a device type, a 3D object set suitable for the device type is screened out from the target object set.
If the context information includes a device capability, a 3D object set suitable for the device capability is screened out from the target object set, where the device capability includes the level of detail at which the device renders 3D objects.
If the context information includes a cache size, a 3D object set whose storage footprint does not exceed the cache size of the first terminal device is screened out from the target object set.
In addition, the processing unit 902 is further configured to scan a surrounding environment, determine at least one anchor point, match the at least one anchor point with anchor points stored locally, determine the target anchor point set, and search the target object set corresponding to the target anchor point set in a pre-stored correspondence relationship according to the target anchor point set.
In addition, in a hardware implementation, an embodiment of the present application further provides a network device, which may be a Cloud server or an AR Cloud in the foregoing embodiment, and is used to implement the method in the foregoing embodiment.
The structure of the network device may be the same as that of the terminal device, or may be different. In some embodiments, referring to fig. 10, a schematic structural diagram of a network device is shown, where the network device may include: a processor 10, a memory 20 and at least one communication interface 30, wherein the processor 10, the memory 20 and the at least one communication interface 30 are coupled by a communication bus.
The processor 10 is a control center of a network device, and is configured to complete communication in a wireless communication system, including data transmission with at least one terminal device; as well as communications with other network devices, etc.
Further, the processor 10 may be composed of an Integrated Circuit (IC), for example, a single packaged IC, or a plurality of packaged ICs with the same or different functions connected. For example, the processor 10 may include a Central Processing Unit (CPU), a Digital Signal Processor (DSP), or the like.
Further, the processor 10 may also include a hardware chip, which may be a logic circuit, an Application Specific Integrated Circuit (ASIC), a Programmable Logic Device (PLD), or a combination thereof. The PLD may be a Complex Programmable Logic Device (CPLD), a field-programmable gate array (FPGA), a General Array Logic (GAL), or any combination thereof.
The memory 20 is used for storing and exchanging various types of data or software, including location information of the terminal device, movement attribute information, a set of target anchors, a set of target objects, and the like. Further, the memory 20 may have stored therein a computer program or code.
Specifically, the memory 20 may include a volatile memory, such as a random access memory (RAM); it may also include a non-volatile memory, such as a flash memory, a hard disk drive (HDD), or a solid-state drive (SSD); and the memory 20 may also include a combination of the above types of memories.
Optionally, the memory 20 may be used as a storage medium, integrated in the processor 10, or configured outside the processor 10, which is not limited in this embodiment.
The at least one communication interface 30 may use any transceiver or similar device for communicating with other devices or communication networks, such as an Ethernet or a WLAN; for example, the network device communicates with UE1 through the at least one communication interface 30.
In addition, the network device further includes a mobile communication module, a wireless communication module, and the like. The mobile communication module includes modules with wireless communication functions, and may further include a filter, a switch, a power amplifier, a low-noise amplifier (LNA), and the like. In some embodiments, at least part of the functional modules of the mobile communication module may be provided in the processor. The wireless communication module can provide solutions for wireless communication applied to the network device, including WLAN, Bluetooth (BT), global navigation satellite systems (GNSS), and the like.
It should be understood that the network device may include more or fewer components; the structure illustrated in the embodiment of the present application does not constitute a specific limitation on the network device. The components shown in fig. 9 or 10 may be implemented in hardware, software, firmware, or any combination thereof.
When implemented in software, may be implemented in whole or in part in the form of a computer program product. For example, the receiving unit 904 and the transmitting unit 903 in the data transmitting apparatus shown in fig. 9 may be implemented by at least one communication interface 30, the functions of the acquiring unit 901 and the processing unit 902 may be implemented by the processor 10, and the function of the storage unit 905 may be implemented by the memory 20.
In addition, the embodiment of the present application also provides a wireless communication system, which may have a structure similar to that of the foregoing fig. 1, and includes at least one terminal device, such as UE1, UE2, and a network device. The structure of the network device may be the same as that shown in fig. 10, and the structure of the terminal device may be the same as that shown in fig. 2.
Optionally, the structure of the network device may also be the same as that of the terminal device shown in fig. 2, which is not limited in this embodiment.
Embodiments of the present application also provide a computer program product comprising one or more computer program instructions. When the computer program instructions are loaded and executed by a computer, the procedures or functions described in the above embodiments are produced in whole or in part. The computer may be a general-purpose computer, a special-purpose computer, a computer network, or another programmable apparatus.
The computer program instructions may be stored in a computer readable storage medium or transmitted from one computer readable storage medium to another computer readable storage medium, for example, from one network node, computer, server, or data center to another node, either by wire or wirelessly.
Alternatively, in another possible implementation of the above data transmission apparatus, the apparatus may be a wireless communication apparatus or a chip in the wireless communication apparatus. Specifically, the apparatus includes: at least one input/output interface and a logic circuit. The input/output interface may be an input/output circuit. The logic circuit may be a signal processor, a chip, or other integrated circuit that may implement the methods of the present application. Wherein, at least one input/output interface is used for inputting or outputting signals or data. In addition, the input/output interface can also be used for realizing communication transmission with at least one terminal device.
The logic circuit is configured to perform part or all of the steps of any one of the methods provided in the embodiments of the present application.
In addition, in the description of the present application, "a plurality" means two or more unless otherwise specified. Furthermore, to clearly describe the technical solutions of the embodiments of the present application, terms such as "first" and "second" are used to distinguish between identical or similar items having substantially the same functions and effects. Those skilled in the art will appreciate that the terms "first", "second", and the like do not limit quantity or execution order, and do not indicate relative importance.
The above-described embodiments of the present application do not limit the scope of the present application.

Claims (24)

1. A method for transmitting data, the method comprising:
the method comprises the steps that network equipment acquires position information and movement attribute information of first terminal equipment at a first moment, wherein the movement attribute information comprises the movement speed and direction of the first terminal equipment;
the network equipment determines a target area according to the position information and the movement attribute information of the first terminal equipment, wherein the target area is an area which is predicted to be reached by the first terminal equipment at a second moment, and the second moment is the next moment of the first moment;
the network device acquires first data corresponding to the target area, wherein the first data comprise a target anchor point set and a target object set, the target area comprises at least one anchor point, the target anchor point set is composed of the at least one anchor point, the at least one anchor point is used for marking at least one 3D object of the target area, and the at least one 3D object constitutes the target object set;
and the network equipment sends the first data to the first terminal equipment.
2. The method of claim 1, wherein the obtaining, by the network device, first data corresponding to the target area comprises:
the network equipment acquires the corresponding relation between at least one area and at least one anchor point set, wherein the at least one area comprises the target area;
the network equipment searches a target anchor point set associated with the target area in the corresponding relation;
the network device determines the set of target objects according to the 3D object marked by each anchor point in the set of target anchor points.
3. The method of claim 2, wherein after the network device obtains the first data corresponding to the target area, the method further comprises:
the network equipment screens out second data from the first data according to the context information of the first terminal equipment, wherein the second data comprises the target anchor point set and a part of the target object set; the context information includes: one or more of user identification, device type, device capabilities, and cache size;
the network device sending the first data to the first terminal device, including:
and the network equipment sends the second data to the first terminal equipment.
4. The method of claim 3, wherein the network device filters out second data from the first data according to the context information of the first terminal device, and wherein the method comprises:
the network equipment deletes the 3D object which is not provided with the access right by the user from the target object set according to the user identification in the context information to obtain a residual 3D object set, wherein the user identification is used for indicating whether the user has the access right to each 3D object;
or, the network device screens out a 3D object set suitable for the device type from the target object set according to the device type in the context information;
or the network device screens out a 3D object set suitable for the device capability from the target object set according to the device capability in the context information, wherein the device capability includes the detail degree of rendering the 3D object by the device;
or, the network device screens out a 3D object set, of which the storage capacity does not exceed the cache size of the first terminal device, from the target object set according to the cache size in the context information.
5. The method according to any one of claims 1 to 4, wherein the network device obtaining the location information of the first terminal device at the first time comprises:
the network device receives, at the first time, first anchor point information sent by the first terminal device;
the network device searches an anchor point database for the first anchor point information and, if it is found, determines the location information of the first terminal device according to the first anchor point information, wherein the anchor point database includes anchor point information of at least one anchor point.
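The lookup in claim 5 — resolving the terminal's position from the anchor information it reports, via an anchor point database — might look like the following sketch. The database layout and the centroid heuristic are illustrative assumptions, not the application's method:

```python
# Hypothetical anchor database: anchor ID -> (x, y, z) position of that anchor.
ANCHOR_DB = {
    "anchor-001": (10.0, 20.0, 0.0),
    "anchor-002": (12.5, 18.0, 0.0),
}

def locate_terminal(anchor_ids):
    """Estimate a terminal's position from the anchors it reported.

    Looks each reported anchor up in the database; returns the centroid of
    the known anchors, or None when no reported anchor is stored (the
    "search whether ... is stored" branch of claim 5 fails).
    """
    known = [ANCHOR_DB[a] for a in anchor_ids if a in ANCHOR_DB]
    if not known:
        return None
    n = len(known)
    return tuple(sum(p[i] for p in known) / n for i in range(3))
```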
6. The method of claim 2, wherein the network device obtaining a correspondence between at least one area and at least one anchor point set comprises:
the network device receives scanning information sent by a second terminal device, wherein the scanning information includes anchor point information of at least one anchor point scanned by the second terminal device in a current area and the 3D object marked by each anchor point;
the network device obtains, according to the scanning information, a correspondence between the current area and the anchor point information of the at least one anchor point in the current area.
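Claim 6 describes how the server-side area-to-anchor correspondence is accumulated from scan reports. A minimal merge of one report into that mapping could look like this; the dictionary shapes and the function name `build_correspondence` are hypothetical:

```python
def build_correspondence(corr, area_id, scan_info):
    """Merge one terminal's scan report into the area -> anchors mapping.

    corr:      accumulated mapping {area_id: {anchor_id: [object IDs]}}
    scan_info: one report {anchor_id: [3D object IDs marked by that anchor]}
    """
    area = corr.setdefault(area_id, {})
    for anchor_id, objects in scan_info.items():
        existing = area.setdefault(anchor_id, [])
        for obj in objects:
            if obj not in existing:      # avoid duplicating objects across reports
                existing.append(obj)
    return corr
```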
7. A method for receiving data, the method comprising:
a first terminal device receives first data sent by a network device, wherein the first data includes a target anchor point set and a target object set, the target anchor point set is composed of at least one anchor point in a target area, the at least one anchor point is used to mark at least one 3D object in the target area, the at least one 3D object constitutes the target object set, and the target area is an area that the first terminal device is predicted to reach at a second time;
when the first terminal device enters the target area and scans the target anchor point set in the first data, the target object set is displayed on the first terminal device.
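On the terminal side, claim 7 amounts to caching the pre-fetched first data and revealing a 3D object once the anchor that marks it has actually been scanned in the target area. A sketch, with an assumed shape for the cached data:

```python
def objects_to_display(first_data, scanned_anchor_ids):
    """Select which pre-fetched 3D objects to show after scanning anchors.

    first_data: hypothetical cached payload
                {"target_anchors": {anchor_id: [object IDs it marks]}}
    Returns the object IDs whose marking anchors were scanned, in scan order.
    """
    anchors = first_data["target_anchors"]
    visible = []
    for anchor_id in scanned_anchor_ids:
        if anchor_id in anchors:                 # ignore anchors outside the target set
            for obj in anchors[anchor_id]:
                if obj not in visible:
                    visible.append(obj)
    return visible
```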
8. The method of claim 7, wherein before the first terminal device receives the first data sent by the network device, the method further comprises:
the first terminal device scans an external environment to obtain first anchor point information, wherein the first anchor point information includes anchor point information of at least one anchor point in the scanned external environment;
the first terminal device sends the first anchor point information to the network device at the first time, wherein the first anchor point information is used to determine location information of the first terminal device.
9. The method according to claim 7 or 8, characterized in that the method further comprises:
the first terminal device sends context information of the first terminal device to the network device, wherein the context information is used for screening out second data from the first data, and the second data comprises the target anchor point set and a part of the target object set.
10. The method of claim 9, wherein the context information comprises: one or more of user identification, device type, device capabilities, and cache size;
if the context information includes a user identifier, the part of the target object set in the second data includes: a remaining 3D object set obtained by deleting, from the target object set, the 3D objects to which the user does not have access rights, wherein the user identifier is used to indicate whether the user has access rights to each 3D object;
if the context information includes a device type, the part of the target object set in the second data includes: a 3D object set in the target object set that is suitable for the device type;
if the context information includes a device capability, the part of the target object set in the second data includes: a 3D object set in the target object set that is suitable for the device capability, wherein the device capability includes the degree of detail at which the device renders 3D objects;
if the context information includes a cache size, the part of the target object set in the second data includes: a 3D object set in the target object set whose storage size does not exceed the cache size of the first terminal device.
11. A data transmission apparatus, characterized in that the apparatus comprises:
an obtaining unit, configured to obtain location information and movement attribute information of a first terminal device at a first time, wherein the movement attribute information includes a moving speed and a moving direction of the first terminal device;
a processing unit, configured to determine a target area according to the location information and the movement attribute information of the first terminal device, and obtain first data corresponding to the target area, wherein the target area is an area that the first terminal device is predicted to reach at a second time, the second time being the time following the first time; and the first data includes a target anchor point set and a target object set, the target area includes at least one anchor point, the target anchor point set is composed of the at least one anchor point, the at least one anchor point is used to mark at least one 3D object in the target area, and the at least one 3D object constitutes the target object set;
a sending unit, configured to send the first data to the first terminal device.
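The prediction step performed by the processing unit — determining the target area from the terminal's position, moving speed, and moving direction at the first time — can be sketched as simple 2D dead reckoning. The circular-area model, the fixed time step, and all parameter names are illustrative assumptions, not the application's method:

```python
import math

def predict_target_area(position, speed, heading_deg, dt, radius):
    """Dead-reckon the area the terminal is expected to reach at the next time.

    position:    (x, y) of the terminal at the first time
    speed:       moving speed, in distance units per second
    heading_deg: moving direction, in degrees from the positive x-axis
    dt:          seconds between the first time and the second time
    radius:      radius of the assumed circular target area

    Returns (center_x, center_y, radius) describing the predicted area.
    """
    theta = math.radians(heading_deg)
    cx = position[0] + speed * dt * math.cos(theta)
    cy = position[1] + speed * dt * math.sin(theta)
    return (cx, cy, radius)
```

The anchor sets and 3D objects pre-fetched for this predicted area are then what claim 11's sending unit delivers to the terminal ahead of its arrival.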
12. The apparatus of claim 11,
the processing unit is further configured to obtain a correspondence between at least one region and at least one anchor point set, and search for a target anchor point set associated with the target region in the correspondence; determining the target object set according to the 3D object marked by each anchor point in the target anchor point set; the at least one region includes the target region.
13. The apparatus of claim 12,
the processing unit is further configured to, after obtaining first data corresponding to the target area, screen out second data from the first data according to context information of the first terminal device, where the second data includes the target anchor point set and a part of the target object set; the context information includes: one or more of user identification, device type, device capabilities, and cache size;
the sending unit is further configured to send the second data to the first terminal device.
14. The apparatus of claim 13,
the processing unit is further configured to: delete, from the target object set according to the user identifier in the context information, the 3D objects to which the user does not have access rights, to obtain a remaining 3D object set, wherein the user identifier is used to indicate whether the user has access rights to each 3D object;
or, screen out, from the target object set according to the device type in the context information, a 3D object set suitable for the device type;
or, screen out, from the target object set according to the device capability in the context information, a 3D object set suitable for the device capability, wherein the device capability includes the degree of detail at which the device renders 3D objects;
or, screen out, from the target object set according to the cache size in the context information, a 3D object set whose storage size does not exceed the cache size of the first terminal device.
15. The apparatus according to any one of claims 11 to 14, further comprising a receiving unit,
the receiving unit is further configured to receive first anchor point information sent by the first terminal device at the first time;
the processing unit is further configured to search an anchor point database for the first anchor point information and, if it is found, determine the location information of the first terminal device according to the first anchor point information, wherein the anchor point database includes anchor point information of at least one anchor point.
16. The apparatus of claim 12,
the receiving unit is further configured to receive scanning information sent by a second terminal device, wherein the scanning information includes anchor point information of at least one anchor point scanned by the second terminal device in a current area and the 3D object marked by each anchor point;
the processing unit is further configured to obtain, according to the scanning information, a correspondence between the current area and the anchor point information of the at least one anchor point in the current area.
17. A data receiving apparatus, applied to a first terminal device, the apparatus comprising:
a receiving unit, configured to receive first data sent by a network device, where the first data includes a target anchor point set and a target object set, the target anchor point set is composed of at least one anchor point in a target area, the at least one anchor point is used to mark at least one 3D object in the target area, the at least one 3D object constitutes the target object set, and the target area is an area that is predicted to be reached by the first terminal device at a second time;
and the processing unit is used for displaying the target object set on the first terminal equipment when the first terminal equipment enters the target area and scans the target anchor point set in the first data.
18. The apparatus of claim 17, further comprising a transmitting unit,
the processing unit is further configured to scan an external environment to obtain first anchor point information before the receiving unit receives the first data, where the first anchor point information includes anchor point information of at least one anchor point included in the scanned external environment;
the sending unit is configured to send the first anchor point information to the network device at the first time, where the first anchor point information is used to determine location information of the first terminal device.
19. The apparatus of claim 17 or 18,
the sending unit is configured to send context information of the first terminal device to the network device, where the context information is used to screen out second data from the first data, and the second data includes the target anchor point set and a part of the target object set.
20. The apparatus of claim 19, wherein the context information comprises: one or more of user identification, device type, device capabilities, and cache size;
the processing unit is further configured to: if the context information includes a user identifier, delete, from the target object set, the 3D objects to which the user does not have access rights, to obtain a remaining 3D object set, wherein the user identifier is used to indicate whether the user has access rights to each 3D object;
if the context information includes a device type, screen out, from the target object set, a 3D object set suitable for the device type;
if the context information includes a device capability, screen out, from the target object set, a 3D object set suitable for the device capability, wherein the device capability includes the degree of detail at which the device renders 3D objects;
and if the context information includes a cache size, screen out, from the target object set, a 3D object set whose storage size does not exceed the cache size of the first terminal device.
21. A data transmission system, characterized in that the system comprises a terminal device and a network device,
the network device comprises the apparatus of any of claims 11 to 16;
the terminal device comprises the apparatus of any one of claims 17 to 20.
22. The system of claim 21, wherein the system further comprises a second terminal device,
the second terminal device sends scanning information to the network device, wherein the scanning information includes anchor point information of at least one anchor point scanned by the second terminal device in a current area and the 3D object marked by each anchor point;
the network device receives the scanning information and obtains, according to the scanning information, a correspondence between the current area and the anchor point information of the at least one anchor point in the current area.
23. A communication device comprising at least one processor and memory,
the memory is configured to store instructions to be executed by the at least one processor;
the at least one processor is configured to execute the instructions to implement the method of any one of claims 1 to 6 or the method of any one of claims 7 to 10.
24. A computer-readable storage medium having computer program instructions stored therein,
wherein the computer program instructions, when executed, implement the method of any one of claims 1 to 6 or the method of any one of claims 7 to 10.
CN202110249946.1A 2021-03-08 2021-03-08 Data sending method, receiving method and device Pending CN115119135A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202110249946.1A CN115119135A (en) 2021-03-08 2021-03-08 Data sending method, receiving method and device


Publications (1)

Publication Number Publication Date
CN115119135A true CN115119135A (en) 2022-09-27

Family

ID=83323510

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202110249946.1A Pending CN115119135A (en) 2021-03-08 2021-03-08 Data sending method, receiving method and device

Country Status (1)

Country Link
CN (1) CN115119135A (en)

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN117295008A (en) * 2023-11-24 2023-12-26 荣耀终端有限公司 Information pushing method and device
CN117295008B (en) * 2023-11-24 2024-04-05 荣耀终端有限公司 Information pushing method and device

Similar Documents

Publication Publication Date Title
CN111182145A (en) Display method and related product
CN112771900B (en) Data transmission method and electronic equipment
WO2021115007A1 (en) Network switching method and electronic device
CN108711355B (en) Track map strategy making and using method, device and readable storage medium
CN111316333A (en) Information prompting method and electronic equipment
WO2021073448A1 (en) Picture rendering method and device, electronic equipment and storage medium
CN103873750A (en) Wearable imaging sensor for communications
US20220262035A1 (en) Method, apparatus, and system for determining pose
CN112130788A (en) Content sharing method and device
WO2021197071A1 (en) Wireless communication system and method
US20230005277A1 (en) Pose determining method and related device
CN114610193A (en) Content sharing method, electronic device, and storage medium
CN106558088B (en) Method and device for generating GIF file
WO2021197354A1 (en) Device positioning method and relevant apparatus
CN115119135A (en) Data sending method, receiving method and device
WO2024001940A1 (en) Vehicle searching method and apparatus, and electronic device
CN114842069A (en) Pose determination method and related equipment
CN114449090A (en) Data sharing method, device and system and electronic equipment
CN111968199A (en) Picture processing method, terminal device and storage medium
WO2022228059A1 (en) Positioning method and apparatus
WO2020051916A1 (en) Method for transmitting information and electronic device
WO2021164387A1 (en) Early warning method and apparatus for target object, and electronic device
CN116797767A (en) Augmented reality scene sharing method and electronic device
CN114092366A (en) Image processing method, mobile terminal and storage medium
CN112840680A (en) Position information processing method and related device

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination