CN114172953A - Cloud navigation method of MR mixed reality scenic spot based on cloud computing - Google Patents

Cloud navigation method of MR mixed reality scenic spot based on cloud computing

Info

Publication number
CN114172953A
CN114172953A
Authority
CN
China
Prior art keywords
performance
cloud
server
user
scenic spot
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Withdrawn
Application number
CN202111367152.1A
Other languages
Chinese (zh)
Inventor
于云沣
周晖
纪芸芸
詹诚
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Nanjing Runma Information Technology Co ltd
Original Assignee
Nanjing Runma Information Technology Co ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Nanjing Runma Information Technology Co ltd filed Critical Nanjing Runma Information Technology Co ltd
Priority to CN202111367152.1A priority Critical patent/CN114172953A/en
Publication of CN114172953A publication Critical patent/CN114172953A/en
Withdrawn legal-status Critical Current

Classifications

    • GPHYSICS
    • G02OPTICS
    • G02BOPTICAL ELEMENTS, SYSTEMS OR APPARATUS
    • G02B27/00Optical systems or apparatus not provided for by any of the groups G02B1/00 - G02B26/00, G02B30/00
    • G02B27/01Head-up displays
    • G02B27/017Head mounted
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F16/00Information retrieval; Database structures therefor; File system structures therefor
    • G06F16/90Details of database functions independent of the retrieved data types
    • G06F16/95Retrieval from the web
    • G06F16/953Querying, e.g. by the use of web search engines
    • G06F16/9537Spatial or temporal dependent retrieval, e.g. spatiotemporal queries
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00Pattern recognition
    • G06F18/20Analysing
    • G06F18/22Matching criteria, e.g. proximity measures
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T17/00Three dimensional [3D] modelling, e.g. data description of 3D objects
    • G06T17/05Geographic models
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04LTRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
    • H04L67/00Network arrangements or protocols for supporting network services or applications
    • H04L67/01Protocols
    • H04L67/10Protocols in which an application is distributed across nodes in the network
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04LTRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
    • H04L67/00Network arrangements or protocols for supporting network services or applications
    • H04L67/01Protocols
    • H04L67/10Protocols in which an application is distributed across nodes in the network
    • H04L67/1095Replication or mirroring of data, e.g. scheduling or transport for data synchronisation between network nodes

Abstract

The invention discloses a cloud navigation method for an MR mixed reality scenic spot based on cloud computing. On the basis of an existing MR glasses navigation system, a cloud navigation system containing a cloud-computing second server is first constructed. A first server exchanges real-time scene images with the MR glasses, locates the user's current position and orientation, judges whether a performance is associated with that position, and transmits a performance loading instruction to the second server. The second server processes the selected performance by cloud computing according to the loading instruction, transmits it over the network to a lower-configuration user client, and plays it, presenting the pictures and sounds that introduce the scenic spot to the user. By applying this cloud navigation method, scenic-spot navigation services and performance-resource processing are transferred to cloud servers, reducing the hardware cost of the MR navigation device and allowing it to run on a computing unit with lower performance requirements; the software loaded on the device is also simplified and its output performance improved.

Description

Cloud navigation method of MR mixed reality scenic spot based on cloud computing
Technical Field
The invention relates to an optimized tour guide method, in particular to an automatic scenic-spot tour guide method for an MR mixed reality scene based on cloud computing.
Background
With the continuous development of material and cultural life, more and more cultural tourism themes are being promoted to a broad audience of users. After busy daily work, people use holidays to travel, relax, broaden their horizons, and recharge. Throughout a trip, whether enjoying natural scenery or visiting exhibition halls, tourists cannot do without guided introductions to the scenic spots; otherwise they may hurry past or miss the cultural features of important attractions.
With the growing maturity of MR and AR technologies and their increasing share of applications in tour guide devices, more and more technical solutions are available for positioning users and presenting scenic-spot introductions during a tour. An existing MR scenic-spot navigation system based on a point cloud map has been produced and popularized. It relies on a computing unit matched with the MR device (an integrated microprocessor, the user's smartphone, or the like) for computation and storage; a finished apk application (developed with Unity 3D) is installed on the computing unit. The user wears MR glasses while moving through the real environment; the binocular camera of the glasses acquires the current real-environment image and uploads it to the scenic-spot server, which determines the user's real-time position and orientation and triggers in real time the MR performance associated with that position, so the user experiences a scenic-spot introduction that fuses 3D virtual content with reality. A smartphone serving as the external computing unit generally requires a processor of Qualcomm Snapdragon 855 class or better, and must be connected to the MR glasses by a data cable for direct data-signal transmission and power supply. The MR glasses themselves are an optical imaging display device whose playable content includes 3D models and special-effect resources, video, audio, pictures, and so on.
However, because users expect tighter fusion of virtual and real content and higher-quality, richer visual and auditory performances, the demands on the graphics processing and computing/storage capability of the MR device are high. In practice, content with large performance resources, such as 3D models with a high triangle count, heavy 3D particle effects, or video, requires strong image processing and rendering capability from the device; the energy consumption of the microprocessor multiplies, and the device soon overheats and stutters, in severe cases even crashing or freezing, giving the user a poor experience. If the whole MR device set must be provisioned with high-end hardware, its cost rises accordingly. A single scenic spot often deploys hundreds of MR devices, significantly increasing its operation and maintenance burden.
Meanwhile, because a given scenic spot contains many performances with large resources, the application's data package becomes very large; applications are therefore currently released with one scenic spot per application package, and even so the package takes a long time to load. A user who wants to experience MR navigation at multiple scenic spots must install several navigation application packages, which is inconvenient.
Disclosure of Invention
In view of the defects of the prior art, the invention aims to provide a cloud navigation method for an MR mixed reality scenic spot based on cloud computing, which addresses the high cost of scenic-spot navigation and the poor experience of visitors.
The technical solution by which the invention achieves the above purpose is as follows. A cloud navigation method for an MR mixed reality scenic spot based on cloud computing comprises the following steps: constructing a cloud navigation system, which comprises a second server that performs image computation and data processing by cloud computing; a first server that identifies the user's position and orientation and queries whether a performance is associated; and MR glasses that capture images of the user's view in real time, upload them to the first server, and receive image data from the second server to display performance content to the user;
the method comprises the steps that a user is positioned, a first server receives a real-time scene image from MR glasses of the user, the current position and direction of the user are identified based on pre-updated point cloud map matching, and whether a user positioning result contains the performance or not is judged according to the preset association between each point position of a scenic spot and the performance;
the method comprises the following steps of performing cloud navigation linkage, receiving a performance loading instruction from a first server by a second server, performing coding processing on audio and video streams of a selected performance by using a cloud computing method, and synchronizing the processed audio and video streams to a client directly associated with a user through an internet communication protocol;
playing the performance: the MR glasses receive the audio and video stream from the second server and, after decoding, display the pictures and sound of the performance to the user.
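The four steps above can be reduced to a minimal message-flow sketch. All class, function, and field names below are illustrative assumptions; the patent specifies behaviour, not an API, and the encode/decode placeholders stand in for real audio/video processing.

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class Pose:
    point_id: str          # nearest scenic-spot point location
    orientation_deg: float

class PointCloudMap:
    """Stand-in for point-cloud matching: maps an image key to a pose."""
    def __init__(self, poses: dict):
        self._poses = poses
    def match(self, image_key: str) -> Pose:
        return self._poses[image_key]

def first_server_handle(image_key: str, cloud_map: PointCloudMap,
                        associations: dict) -> Optional[str]:
    """Steps 1-2: locate the user, then decide whether a performance is
    triggered. Returns a performance id to forward to the second server,
    or None when the point location has no associated performance."""
    pose = cloud_map.match(image_key)
    return associations.get(pose.point_id)

# Steps 3-4 reduced to a trivial encode/decode round trip.
def second_server_encode(performance_id: str) -> bytes:
    return performance_id.encode("utf-8")   # placeholder for A/V encoding

def glasses_decode(stream: bytes) -> str:
    return stream.decode("utf-8")           # placeholder for A/V decoding
```

A call like `first_server_handle("img1", cloud_map, associations)` either yields a performance id for the second server or `None`, which corresponds to the no-performance branch described later in the specification.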
The cloud navigation method provided by the invention offers the following notable advances: it transfers the scenic spot's navigation service and performance-resource processing to cloud servers, reducing the hardware cost of the MR navigation device and allowing it to run on a computing unit with lower performance requirements; and multiple scenic spots can share the same navigation application package, simplifying the software loaded on the device and improving its output performance.
Drawings
Fig. 1 is a schematic view of a topological structure of a cloud navigation system constructed by the method of the present invention.
Detailed Description
The following detailed description of embodiments is provided so that the technical solution of the invention can be clearly understood and its protection scope clearly defined.
Existing MR glasses applied in the tour guide industry face an evident contradiction: as functions are increasingly optimized, the required hardware configuration rises, while device size, energy consumption, and heat dissipation remain constrained. Drawing on years of technical experience with intelligent terminals and scene reproduction, the applicant's designers therefore innovatively propose a cloud navigation method for an MR mixed reality scenic spot based on cloud computing: the high-load audio/video image computation and data processing of the MR glasses is split out of the navigation system and handled by the powerful server hardware of the cloud, which interacts with the client over the network. This provides a new technical solution for realizing cloud navigation within the scenic spot while optimizing the hardware configuration and output performance of the MR glasses.
Viewed as a technical overview, the cloud navigation method of the invention mainly comprises the following interrelated steps. First, a cloud navigation system is constructed, comprising a second server that performs image computation and data processing by cloud computing; a first server that identifies the user's position and orientation and queries whether a performance is associated; and MR glasses that capture images of the user's view in real time, upload them to the first server, and receive image data from the second server to show performance content to the user, as shown in Fig. 1. In the preferred embodiment shown, the system also includes a smartphone directly associated with the user, serving as an external communication aid and computing unit.
Second, the user is positioned: the first server receives the real-time scene image from the user's MR glasses, as shown by arrow ①; identifies the user's current position and orientation by matching against the pre-updated point cloud map, as shown by arrow ②; and judges, according to the preset association between each scenic-spot point location and a performance, whether the positioning result is associated with a performance. In short, the main function of the first server is to handle positioning and performance queries during the user's visit.
Third, cloud navigation linkage, the main innovation of this scheme: the second server receives a performance loading instruction from the first server, as shown by arrow ③; then encodes the audio and video stream of the selected performance by cloud computing, as shown by arrow ④; and finally synchronizes the processed audio and video stream to the client directly associated with the user through an internet communication protocol, as shown by arrow ⑤. A cloud server whose configuration far exceeds that of the MR glasses thus replaces the computing unit conventionally attached to the glasses, markedly improving image and data processing capability for performance content.
Fourth, the performance is played: the MR glasses receive the audio and video stream from the second server and, after decoding, display the pictures and sound of the performance to the user. This is the basic, original function of the MR glasses, so a detailed description is omitted.
Based on the above summary, the cloud navigation method provides a new MR mixed reality scenic-spot navigation scheme that combines cloud computing with point-cloud-map matching and identification. The core processing of cloud navigation runs not on the navigation client (that is, the MR glasses) but on the cloud server; the cloud server transmits the navigation video stream and pictures to the client, which merely displays them. The MR glasses therefore need neither a powerful built-in computing unit for image computation and data processing nor a cable-connected external computing unit; they only need basic streaming-media playback capability and the ability to collect user interaction instructions and send them to the cloud server. Even light-end devices with limited computing capability can present high-quality, content-rich performances, giving users an MR scenic-spot guide experience with more complex content, finer models, and more striking effects.
The cloud navigation method is further defined in the following respects. First, for user positioning, the MR glasses track the user's movement at intervals of 1/30 second, acquiring live images and updating feature points through the binocular camera. The acquired live image is a 3D image with depth taken from the forward viewing angle of the MR glasses; being similar in nature to the three-dimensional point cloud map, it facilitates matching between the two. Because the MR glasses do not themselves perform the matching against the point cloud view but delegate it to the first server, the live-image acquisition rate needed to track the user's movement and orientation changes is fully achievable on the conventional software and hardware of the MR glasses. In immediacy of response to the visitor's actual movement, the system, resting on high-speed communication, far exceeds the attention a human guide can give to an individual or a group, and the playing or switching of guide information is more accurate.
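The 1/30-second capture cycle described above can be sketched as a deterministic schedule. Here `capture` and `upload` are hypothetical callables standing in for the binocular camera and the upload to the first server; a real device would sleep until each deadline rather than iterate instantly.

```python
FRAME_INTERVAL_S = 1 / 30   # the 1/30 s tracking interval stated above

def run_capture_cycle(capture, upload, n_frames: int) -> list:
    """Capture n_frames at the stated interval and upload each for
    matching on the first server. Returns the scheduled timestamps."""
    timestamps = []
    for i in range(n_frames):
        t = i * FRAME_INTERVAL_S   # deterministic schedule; a real device
        timestamps.append(t)       # would wait until time t before capturing
        upload(capture())
    return timestamps
```

At this rate the glasses upload 30 frames per second, which is why the specification stresses that the matching itself is offloaded to the first server.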
The constructed point cloud map is a set of three-dimensional maps acquired along the segmented tour path of the scenic area: feature points are collected at and around each scenic spot or scene, forming groups of three-dimensional images from different viewing angles. All three-dimensional images obtained by traversing every scenic spot and scene in the area, filtered with a number of positioning feature points as the criterion, form the basic image set for real-time visual positioning of the MR glasses. As visitors move between scenic spots, the matching target for the real-time live image captured by the MR glasses is the point cloud map segment closest to the visitor's current view, namely the one with the most overlapping or identical feature points. Naturally, subject to its storage capacity, the first server can store and update point cloud maps corresponding to two or more scenic areas and apply them adaptively.
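Selecting the map segment closest to the current view, i.e. the one sharing the most feature points with the live image, can be sketched as follows. Feature points are reduced to hashable ids here; this is an illustrative simplification, not the patent's actual matching algorithm, which operates on 3D point clouds.

```python
def select_segment(live_features: set, segments: dict) -> str:
    """Return the id of the point-cloud-map segment that shares the most
    feature points with the live image (segments: id -> set of feature ids)."""
    return max(segments, key=lambda sid: len(segments[sid] & live_features))
```

As the visitor walks between scenic spots, re-running this selection on each uploaded frame switches the matching target to the segment observed most directly.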
Second, a performance used for navigation takes audio/video streaming media as its main playing form, assisted by one or more nested combinations of text, pictures, three-dimensional models, and animated special effects. The duration of each performance and the number in the set are adjusted to the layout of the scenic area's spots and are stored and iteratively updated on the second server. The second server can store at scale all performance resources displayable through the MR glasses for the scenic spots, adaptively retrieving and encoding them on receipt of a loading instruction from the first server and interacting synchronously with the MR glasses bearing the identification code of the corresponding scenic spot.
Third, the MR glasses are not merely a performance player. To embody the human-computer interaction of the navigation device and serve as the link between the user and the second server, they also, on the basis of their own hardware, encode events in response to user input, exchange those events directly with the second server, and adjust performance content accordingly.
Fourth, when the first server judges that no performance is associated with the user's positioning result, the cloud navigation software provides a more user-friendly function: it queries the several performances closest to the user's current position and sends an instruction to the second server to display either a prompt that no performance is available or suggestion information containing the distance and direction of nearby performances.
The essence of the invention can be understood from a more specific cloud guide example, which proceeds as follows.
First, when a user (or visitor) uses the MR device, its binocular camera acquires an image of the user's surroundings, which is uploaded to the first server and matched against position-tagged images or the point cloud map held on the cloud server; matching of positioning feature points yields the user's current geographic position and facing direction.
The first server then queries, according to the user's current position, whether MR performance content is associated with it; if so, it sends the corresponding MR performance loading command to the second server. If the first server judges that no performance content is associated with the user's current position, it records that position and queries the 3 performances closest to it. If no performance exists within 100 m of the user, it sends a command to the second server to display the prompt "no MR performances nearby". If performance content does exist within 100 m, it sends a command to the second server to display the 1-3 performance tags closest to the user at the corresponding positions in the virtual image shown by the MR glasses, including the distance and direction of each nearby performance from the current position, reminding the user which way to proceed next.
Next, on receiving the command, the second server runs the cloud navigation software and loads the corresponding MR performance. The cloud server acquires the audio and video stream of the cloud navigation software through an audio/video capturer, encodes it with an audio/video encoder, and sends the encoded stream to the cloud navigation client (that is, the MR glasses or the user's smartphone) through an internet communication protocol such as the Real-Time Streaming Protocol.
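The encode-and-stream step can be sketched as a chunked transfer. A real deployment would use an actual A/V encoder and a streaming protocol such as RTSP; here, as an assumption for illustration, encoding is a pass-through and transport is an in-memory queue.

```python
from queue import Queue

CHUNK = 1024  # illustrative packet size, not taken from the patent

def stream_performance(av_bytes: bytes, channel: Queue) -> int:
    """Server side: 'encode' and push the performance to the client in
    fixed-size chunks; each put stands in for one streamed packet."""
    sent = 0
    for i in range(0, len(av_bytes), CHUNK):
        channel.put(av_bytes[i:i + CHUNK])
        sent += 1
    channel.put(None)            # end-of-stream marker
    return sent

def receive_performance(channel: Queue) -> bytes:
    """Client side: reassemble the stream for 'decoding' and playback."""
    out = bytearray()
    while (chunk := channel.get()) is not None:
        out.extend(chunk)
    return bytes(out)
```

The in-memory queue makes the producer/consumer split explicit: the second server fills the channel, while the MR glasses (or smartphone) drain it and hand the bytes to the player.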
Finally, after receiving the encoded audio and video stream, the cloud navigation client decodes it with an audio/video decoder and plays the cloud navigation pictures and sound through an audio/video player, so the user sees the navigation pictures and hears the navigation sound on the MR device.
As a supplement, the cloud navigation client may simultaneously monitor the user's interactive instructions (input instructions are mainly click events from the MR controller's interaction ray; for example, when the user experiences a signing performance, clicking the signing button subsequently triggers audio/video playback of the signing animation and presentation of the signing result). The client encodes an event according to the instruction input by the user and sends the encoded input event to the second server through a custom communication protocol. On receiving these encoded input events, the second server decodes them and reproduces the user's input.
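The event-coding round trip described above can be sketched with JSON over bytes. The event schema and the choice of JSON are assumptions for illustration; the patent only states that a custom communication protocol is used.

```python
import json

def encode_event(event_type: str, target: str) -> bytes:
    """Client side: encode a user input event (e.g. a click from the
    controller's interaction ray) for transmission to the second server."""
    return json.dumps({"type": event_type, "target": target}).encode("utf-8")

def decode_event(payload: bytes) -> dict:
    """Second-server side: decode the event so the input can be reproduced
    and the performance content adjusted accordingly."""
    return json.loads(payload.decode("utf-8"))
```

A click on the signing button would travel as `encode_event("click", "signing_button")` and be reproduced server-side by `decode_event`.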
In summary, the scenic-spot automatic tour guide method of the invention, based on visual positioning against a three-dimensional point cloud map, has the following characteristics, from which its prominent substantive features and notable progress can be seen: it requires no added peripheral construction in the scenic spot; it obtains the user's current position and orientation in real time by matching visually acquired feature points against a pre-built point cloud map, and presents through the MR glasses the guide information whose trigger conditions are met; and while reducing the scenic spot's investment in guide costs, it greatly improves the smoothness and immersion of the touring process.
Besides the above embodiments, the invention may have other embodiments; any technical solution formed by equivalent substitution or equivalent transformation falls within the protection scope claimed by the invention.

Claims (9)

1. A cloud navigation method for an MR mixed reality scenic spot based on cloud computing, characterized by comprising the following steps:
constructing a cloud navigation system, which comprises a second server that performs image computation and data processing by cloud computing; a first server that identifies the user's position and orientation and queries whether a performance is associated; and MR glasses that capture images of the user's view in real time, upload them to the first server, and receive image data from the second server to display performance content to the user;
positioning the user: the first server receives a real-time scene image from the user's MR glasses, identifies the user's current position and orientation by matching against a pre-updated point cloud map, and judges, according to the preset association between each scenic-spot point location and a performance, whether the positioning result is associated with a performance;
cloud navigation linkage: the second server receives a performance loading instruction from the first server, encodes the audio and video stream of the selected performance by cloud computing, and synchronizes the processed audio and video stream to the client directly associated with the user through an internet communication protocol;
playing the performance: the MR glasses receive the audio and video stream from the second server and, after decoding, display the pictures and sound of the performance to the user.
2. The cloud navigation method for an MR mixed reality scenic spot based on cloud computing according to claim 1, characterized in that: in the user positioning, the MR glasses track the user's movement and facing direction, and the camera acquires the real-time scene image at frame intervals of 1/30 second; the real-time scene image contains at least two positioning feature points and is updated in real time.
3. The cloud navigation method for an MR mixed reality scenic spot based on cloud computing according to claim 2, characterized in that: the real-time scene image is a 3D image with depth, collected from the forward viewing angle of the binocular camera of the MR glasses.
4. The cloud navigation method for an MR mixed reality scenic spot based on cloud computing according to claim 1, characterized in that: the point cloud map in the first server is a set obtained by traversing all point locations of the scenic spot, collecting images from one or more viewing angles, and filtering them; each retained image contains a number of positioning feature points for matching and identification.
5. The cloud navigation method for an MR mixed reality scenic spot based on cloud computing according to claim 4, characterized in that: the first server stores and updates point cloud maps corresponding to two or more scenic areas and applies them adaptively.
6. The cloud navigation method for an MR mixed reality scenic spot based on cloud computing according to claim 1, characterized in that: the performance takes the audio and video stream as its main playing form, assisted by one or more nested combinations of text, pictures, three-dimensional models, and animated special effects; the duration of each performance and the number in the set are adjusted to the layout of the scenic area's spots and are stored and iteratively updated on the second server.
7. The cloud navigation method for an MR mixed reality scenic spot based on cloud computing according to claim 6, characterized in that: the second server stores at scale all performance resources displayable through the MR glasses for the scenic spots, performs adaptive scheduling and encoding on receipt of the loading instruction from the first server, and interacts synchronously with the MR glasses.
8. The cloud navigation method for an MR mixed reality scenic spot based on cloud computing according to claim 1, characterized in that: the MR glasses encode events in response to instructions input by the user, exchange those events directly with the second server, and adjust the performance content.
9. The cloud navigation method for an MR mixed reality scenic spot based on cloud computing according to claim 1, characterized in that: when the first server judges that no performance is associated with the user's positioning result, it queries the several performances closest to the user's current position and sends an instruction to the second server to display either a prompt that no performance is available or suggestion information containing the distance and direction of nearby performances.
CN202111367152.1A 2021-11-18 2021-11-18 Cloud navigation method of MR mixed reality scenic spot based on cloud computing Withdrawn CN114172953A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202111367152.1A CN114172953A (en) 2021-11-18 2021-11-18 Cloud navigation method of MR mixed reality scenic spot based on cloud computing


Publications (1)

Publication Number Publication Date
CN114172953A true CN114172953A (en) 2022-03-11

Family

ID=80479576

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202111367152.1A Withdrawn CN114172953A (en) 2021-11-18 2021-11-18 Cloud navigation method of MR mixed reality scenic spot based on cloud computing

Country Status (1)

Country Link
CN (1) CN114172953A (en)


Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
WW01 Invention patent application withdrawn after publication

Application publication date: 20220311