WO2022089100A1 - Video see-through method, apparatus, system, electronic device and storage medium - Google Patents

Video see-through method, apparatus, system, electronic device and storage medium Download PDF

Info

Publication number
WO2022089100A1
WO2022089100A1 (PCT/CN2021/119608)
Authority
WO
WIPO (PCT)
Prior art keywords
image
virtual
real
real image
virtual image
Prior art date
Application number
PCT/CN2021/119608
Other languages
English (en)
French (fr)
Inventor
梁天鹰
赖武军
Original Assignee
Huawei Technologies Co., Ltd. (华为技术有限公司)
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Huawei Technologies Co., Ltd. (华为技术有限公司)
Publication of WO2022089100A1

Links

Images

Classifications

    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N13/00 Stereoscopic video systems; Multi-view video systems; Details thereof
    • H04N13/30 Image reproducers
    • H04N13/332 Displays for viewing with the aid of special glasses or head-mounted displays [HMD]
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N13/00 Stereoscopic video systems; Multi-view video systems; Details thereof
    • H04N13/20 Image signal generators
    • H04N13/293 Generating mixed stereoscopic images; Generating mixed monoscopic and stereoscopic images, e.g. a stereoscopic image overlay window on a monoscopic image background
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N13/00 Stereoscopic video systems; Multi-view video systems; Details thereof
    • H04N13/30 Image reproducers
    • H04N13/332 Displays for viewing with the aid of special glasses or head-mounted displays [HMD]
    • H04N13/344 Displays for viewing with the aid of special glasses or head-mounted displays [HMD] with head-mounted left-right displays
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N13/00 Stereoscopic video systems; Multi-view video systems; Details thereof
    • H04N13/30 Image reproducers
    • H04N13/366 Image reproducers using viewer tracking

Definitions

  • The embodiments of the present application relate to the field of electronic devices, and in particular to a video see-through method, apparatus, system, electronic device, and storage medium.
  • Video see-through technology captures real images of the real world through a camera (or camera module), generates a virtual image based on the real image, and then combines the virtual image and the real image for display.
  • For example, video see-through technology can be applied to virtual reality helmets, endowing them with augmented reality (AR) functionality.
  • At present, when a user uses a video see-through device (such as a head-mounted device), there is a "time misalignment" between the user and reality: the composite image seen by the user's eyes lags the real scene by a considerable delay.
  • The larger the delay, the more obvious the time misalignment becomes.
  • For example, when a user wearing a video see-through head-mounted device reaches for an object, the brain may already perceive that the hand has touched the object, but the eyes only see it after a certain delay.
  • Embodiments of the present application provide a video see-through method, apparatus, system, electronic device, and storage medium, which render the real image and the virtual object separately and then synthesize them, reducing the overall delay from acquiring the real image to displaying the composite image in the video see-through process.
  • In a first aspect, an embodiment of the present application provides a video see-through method. The method includes: acquiring, in parallel, a real image corresponding to a real-world scene and a virtual image containing a virtual object; determining a first image according to the real image and the acquisition result of the virtual image, where the first image is the real image or a composite image of the real image and the virtual image; and displaying the first image.
  • By acquiring the real image and the virtual image in parallel, the method renders the real image and the virtual object separately before synthesizing them, thereby reducing the overall delay from acquiring the real image to displaying the composite image in the video see-through process.
  • In one possible design, determining the first image according to the real image and the acquisition result of the virtual image includes: for each frame of the real image, if the virtual image has not yet been acquired when the real image is acquired, waiting until the virtual image is acquired and then synthesizing the real image and the virtual image to obtain a composite image as the first image.
  • It can be understood that if the virtual image has already been acquired when the real image is acquired, the real image and the virtual image are synthesized directly to obtain the composite image as the first image.
  • In another possible design, determining the first image according to the real image and the acquisition result of the virtual image includes: for a real image acquired before the virtual image is acquired, directly determining the real image to be the first image; and for a real image acquired after the virtual image is acquired, synthesizing the real image and the virtual image to obtain a composite image as the first image.
  • For a real image acquired before the virtual image is acquired, directly determining the real image to be the first image allows the real image to be displayed immediately, which reduces the time during which the display shows no picture or a blank picture when the video see-through system first starts.
  • Optionally, synthesizing the real image and the virtual image to obtain the composite image includes: resizing the real image and the virtual image to a first size; marking valid pixels in the virtual image as 1 and invalid pixels as 0 to obtain a mask image corresponding to the virtual image, where the valid pixels are the pixels occupied by the virtual object in the virtual image and the invalid pixels are the pixels in the virtual image other than the valid pixels; and synthesizing the real image and the virtual image according to the mask image to obtain the composite image.
  • In a second aspect, an embodiment of the present application provides a video see-through apparatus, which can be used to implement the method described in the first aspect above.
  • The functions of the apparatus may be implemented by hardware, or by hardware executing corresponding software.
  • The hardware or software includes one or more modules or units corresponding to the above functions, for example, an acquiring unit, a synthesizing unit, and a display unit.
  • The acquiring unit is configured to acquire, in parallel, the real image corresponding to the real-world scene and the virtual image containing the virtual object;
  • the synthesizing unit is configured to determine the first image according to the real image and the acquisition result of the virtual image, where the first image is the real image or a composite image of the real image and the virtual image;
  • and the display unit is configured to display the first image.
  • In one possible design, the synthesizing unit is specifically configured to: for each frame of the real image, if the virtual image has not yet been acquired when the real image is acquired, wait until the virtual image is acquired and then synthesize the real image and the virtual image to obtain a composite image as the first image; if the virtual image has already been acquired when the real image is acquired, synthesize the real image and the virtual image directly to obtain the composite image as the first image.
  • In another possible design, the synthesizing unit is specifically configured to: for a real image acquired before the virtual image is acquired, directly determine the real image to be the first image; and for a real image acquired after the virtual image is acquired, synthesize the real image and the virtual image to obtain a composite image as the first image.
  • Optionally, the synthesizing unit is specifically configured to: resize the real image and the virtual image to a first size; mark valid pixels in the virtual image as 1 and invalid pixels as 0 to obtain a mask image corresponding to the virtual image, where the valid pixels are the pixels occupied by the virtual object in the virtual image and the invalid pixels are the pixels in the virtual image other than the valid pixels; and synthesize the real image and the virtual image according to the mask image to obtain a composite image.
  • For example, the first size may be 848*480, 300*150, etc.
  • The first size may be adjusted according to display requirements, the virtual image, and/or the real image; this is not limited in this application.
  • In a third aspect, an embodiment of the present application provides a video see-through system, including: a camera module, a central processing unit, a graphics processor, an image synthesis chip, and a display. The camera module is configured to capture a real image corresponding to a real-world scene and send it directly to the image synthesis chip; the central processing unit and the graphics processor are configured to generate a virtual image containing a virtual object and send it to the image synthesis chip; the image synthesis chip is configured to acquire the real image and the virtual image in parallel, determine a first image according to the real image and the acquisition result of the virtual image, and send it to the display, where the first image is the real image or a composite image of the real image and the virtual image; and the display is configured to display the first image.
  • In this video see-through system, the algorithm for synthesizing the virtual image and the real image is hardened in the image synthesis chip, which reduces the computation delay when synthesizing the virtual image and the real image.
  • In a fourth aspect, an embodiment of the present application provides an electronic device, which may be a video see-through device, such as a video see-through head-mounted device or video see-through glasses.
  • The electronic device includes: a processor, and a memory for storing instructions executable by the processor; the processor is configured to execute the instructions to cause the electronic device to implement the method described in the first aspect.
  • In a fifth aspect, embodiments of the present application provide a computer-readable storage medium on which computer program instructions are stored; when executed by an electronic device, the computer program instructions cause the electronic device to implement the method described in the first aspect.
  • In a sixth aspect, an embodiment of the present application provides a computer program product, including computer-readable code which, when run in an electronic device, causes the electronic device to implement the method described in the first aspect.
  • FIG. 1 shows a schematic diagram of the principle of video see-through;
  • FIG. 2 shows a schematic structural diagram of a video see-through system provided by an embodiment of the present application;
  • FIG. 3 shows a schematic diagram of a virtual image provided by an embodiment of the present application;
  • FIG. 4 shows a schematic diagram of a real image provided by an embodiment of the present application;
  • FIG. 5 shows a schematic diagram of a composite image provided by an embodiment of the present application;
  • FIG. 6 shows a schematic structural diagram of a video see-through apparatus provided by an embodiment of the present application;
  • FIG. 7 shows another schematic structural diagram of a video see-through system provided by an embodiment of the present application.
  • With the development of virtual reality (VR) devices, video see-through technology based on camera modules has gradually become a mainstream technology, with a wide range of application scenarios such as viewing the outside world, electronic fences, and MR applications.
  • For example, video see-through technology can be applied to virtual reality helmets, endowing them with augmented reality (AR) functionality.
  • Video see-through technology captures real images of the real world through a camera (also called a camera module), generates a virtual image from the real image, and then synthesizes the virtual image and the real image for display.
  • By way of example, FIG. 1 shows a schematic diagram of the principle of video see-through.
  • As shown in FIG. 1, in current video see-through technology, the camera module captures the real-world scene, obtains the real image (or video stream) corresponding to the real-world scene, and transmits it to the intermediate processing module.
  • The intermediate processing module may include: a simultaneous localization and mapping (SLAM) module, a plane detection module, a virtual object generation module, and a virtual reality synthesis module.
  • The SLAM module performs localization based on the environment in which the camera module and the other sensors are located, while simultaneously mapping the environment structure from the real image.
  • The other sensors may include gyroscopes, accelerometers, infrared sensors, etc.
  • For example, the SLAM module can obtain pose information such as rotation and translation collected by the gyroscope and map the environment structure.
  • The plane detection module detects which parts of the real image are planes, such as desktops and floors.
  • The virtual object generation module combines the processing results of the SLAM module and the plane detection module to generate a virtual object, obtains a virtual image containing the virtual object, and transmits it to the virtual reality synthesis module.
  • The virtual reality synthesis module synthesizes the virtual image output by the virtual object generation module with the real image captured by the camera module to obtain a composite image, and transmits it to the display module.
  • The display module can display the composite image, for example, presenting it in front of the human eye through a display.
  • Suppose that in the video see-through process shown in FIG. 1, the step of the camera module acquiring the real image takes t0; the step of the camera module transmitting the real image to the intermediate processing module takes t1; the processing steps of the SLAM module, the plane detection module, and the virtual object generation module take t2, t3, and t4 respectively; the processing step of the virtual reality synthesis module obtaining the composite image takes t5; the step of the virtual reality synthesis module transmitting the composite image to the display module takes t6; and the step of the display module displaying the composite image takes t7.
  • The overall delay T_total of the video see-through process is then the sum of t0 through t7, that is:
  • T_total = t0 + t1 + t2 + t3 + t4 + t5 + t6 + t7.
  • In scenarios where the intermediate processing module includes more sub-modules, the overall delay of video see-through may increase further, exceeding the above value of T_total.
  • Against this background, the embodiments of the present application provide a video see-through system that renders the real image and the virtual object separately and then synthesizes them, reducing the overall delay from acquiring the real image to displaying the composite image in the video see-through process.
  • The embodiments of the present application are described below by way of example with reference to the accompanying drawings.
  • FIG. 2 shows a schematic structural diagram of a video see-through system provided by an embodiment of the present application.
  • The video see-through system may include: a camera module, an intermediate processing module, a display module, and other sensors.
  • The camera module captures the real-world scene, obtains the real image corresponding to the real-world scene, and transmits it to the intermediate processing module.
  • The intermediate processing module may include: a SLAM module, a plane detection module, a virtual object generation module, and a virtual reality synthesis module, each of which can implement the same functions as in the foregoing embodiments.
  • The camera module, also called a camera imaging module, may specifically include a lens, a filter, an image sensor, an image signal processor (ISP), etc., which will not be described one by one here.
  • The SLAM module and the plane detection module may be implemented on a central processing unit (CPU).
  • The virtual object generation module may be implemented on a graphics processing unit (GPU).
  • The virtual reality synthesis module may be a separate chip for implementing the function of synthesizing virtual images and real images; this application is not limited in this regard.
  • The display module may be a display capable of displaying the composite image.
  • For example, the display module may be a display on a video see-through head-mounted device.
  • The multiple modules shown in FIG. 2 may be integrated into one device, such as a video see-through head-mounted device, or deployed on multiple devices to form a video see-through system.
  • For example, the camera module may be a web camera connected to the Internet, a separate image capture device (e.g., a video camera), etc.
  • The camera module may be connected to a personal computer (PC) or a mobile phone and send the captured real images to the PC or phone.
  • The CPU and GPU in the PC or phone then serve as algorithm processing devices implementing the functions of the above intermediate processing module, and the display screen of the PC or phone implements the functions of the above display module; this application is likewise not limited in this regard.
  • Based on the video see-through system shown in FIG. 2, in the embodiments of the present application, after the camera module obtains the real image, it sends the real image directly to the virtual reality synthesis module.
  • The SLAM module, the plane detection module, and the virtual object generation module process in sequence, and after the virtual image is obtained, the virtual object generation module sends the virtual image to the virtual reality synthesis module.
  • The virtual reality synthesis module synthesizes the received virtual image and real image to obtain a composite image, and transmits the composite image to the display module for display.
  • In the video see-through system shown in FIG. 2, the step of the camera module acquiring the real image, the processing steps of the SLAM module, the plane detection module, and the virtual object generation module, and the step of the display module displaying the composite image are all the same as in the video see-through process shown in FIG. 1.
  • Based on this, suppose the step of the camera module acquiring the real image still takes t0, the processing step of the SLAM module still takes t2, the processing step of the plane detection module still takes t3, the processing step of the virtual object generation module still takes t4, and the step of the display module displaying the composite image takes t7.
  • In addition, suppose the step of the camera module transmitting the real image to the virtual reality synthesis module takes t1_new, the processing step of the virtual reality synthesis module obtaining the composite image takes t5_new, and the step of the virtual reality synthesis module transmitting the composite image to the display module takes t6_new.
  • The overall delay T_total_new of the video see-through process in this embodiment is then:
  • T_total_new = t0 + t1_new + t5_new + t6_new + t7.
  • Compared with the video see-through process shown in FIG. 1, the processing steps of the SLAM module, the plane detection module, and the virtual object generation module in the intermediate processing module are performed in parallel with the step of the camera module transmitting the real image to the virtual reality synthesis module and the processing step of the virtual reality synthesis module obtaining the composite image. That is, the virtual image and the real image are rendered separately and then synthesized, and the real image to be synthesized by the virtual reality synthesis module does not pass through intermediate processing modules such as the SLAM module, the plane detection module, and the virtual object generation module.
  • Therefore, the overall delay T_total_new of the video see-through process in the embodiments of the present application is much smaller than the overall delay T_total of the existing video see-through process, so the embodiments can effectively reduce the overall system delay of the video see-through system and greatly mitigate the negative impact of the "time misalignment" between the user and reality. A toy numeric comparison is sketched below.
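  • To make the delay comparison concrete, the following is a minimal sketch (not part of the patent; all stage timings are hypothetical placeholder values) that models the serial pipeline of FIG. 1 against the parallel pipeline of FIG. 2:

```python
# Minimal latency model of the two pipelines. All timings (ms) are
# hypothetical placeholders, not values taken from the patent.
t0, t1, t2, t3, t4, t5, t6, t7 = 5, 2, 15, 8, 10, 4, 2, 8
t1_new, t5_new, t6_new = 2, 1, 1  # direct camera-to-synthesis path

# FIG. 1: every stage sits on the critical path of each displayed frame.
T_total = t0 + t1 + t2 + t3 + t4 + t5 + t6 + t7

# FIG. 2: SLAM, plane detection, and virtual object generation (t2+t3+t4)
# run in parallel with the real-image path, so they drop off the
# real-image-to-display critical path once a virtual image is available.
T_total_new = t0 + t1_new + t5_new + t6_new + t7

print(f"serial   T_total     = {T_total} ms")      # 54 ms
print(f"parallel T_total_new = {T_total_new} ms")  # 17 ms
```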
  • In actual implementation, the processing of the virtual reality synthesis module involves the following two scenarios.
  • Scenario 1: the virtual reality synthesis module has received both the real image from the camera module and the virtual image from the virtual object generation module.
  • Scenario 2: the virtual reality synthesis module has received the real image from the camera module but has not received the virtual image from the virtual object generation module.
  • In one possible design, for Scenario 1, the virtual reality synthesis module synthesizes the real image and the virtual image and then sends the composite image to the display module for display.
  • For Scenario 2, the virtual reality synthesis module can wait until it receives the virtual image and then perform operations similar to Scenario 1.
  • For example, suppose a video see-through virtual reality helmet obtains the first frame of the real image of the real scene through the camera module at a first moment (e.g., the moment the helmet's power switch is turned on) and sends it to the virtual reality synthesis module, while the virtual object generation module has not yet generated a virtual image containing a virtual object.
  • The virtual reality synthesis module can then determine, on receiving the first frame of the real image, whether a virtual image has been received; if not, it can wait for the virtual image.
  • Suppose the virtual image is received at the kth frame of the real image: the virtual reality synthesis module then synthesizes the 1st through kth frames of the real image with the virtual image respectively and sends the composite images to the display module for display in the order of the corresponding real images.
  • Similarly, when the virtual reality synthesis module subsequently receives the (k+1)th, (k+2)th, (k+3)th, etc. frames of the real image, it uses the same processing as for the kth frame.
  • Here k is an integer greater than 1, e.g., 2, 3, 5, 8, or 10, without limitation.
  • In another possible design, for Scenario 1, the virtual reality synthesis module synthesizes the real image and the virtual image and then sends the composite image to the display module for display.
  • For Scenario 2, the virtual reality synthesis module sends the real image directly to the display module for display.
  • For example, suppose a video see-through virtual reality helmet obtains the first frame of the real image of the real scene through the camera module at a first moment (e.g., the moment the helmet's power switch is turned on) and sends it to the virtual reality synthesis module, while the virtual object generation module has not yet generated a virtual image containing a virtual object.
  • The virtual reality synthesis module can then determine, on receiving the first frame of the real image, whether a virtual image has been received; since none has been received, it sends the first frame of the real image directly to the display module for display.
  • Similarly, when the virtual reality synthesis module subsequently receives the 2nd, 3rd, 4th, etc. frames of the real image, if no virtual image has been received, it uses the same processing as for the first frame. If the virtual reality synthesis module determines that a virtual image has been received when it receives the kth frame of the real image, it synthesizes the kth frame of the real image with the virtual image and sends the composite image to the display module for display. Similarly, the (k+1)th, (k+2)th, (k+3)th, etc. frames of the real image are then handled in the same way as the kth frame. Here k is an integer greater than 1, e.g., 2, 3, 5, 8, or 10, without limitation.
  • In this example, the handling of the 1st through (k-1)th frames of the real image follows Scenario 2 above, and the handling of the kth frame and every subsequent frame follows Scenario 1 above.
  • In this design, when the virtual reality synthesis module has received the real image from the camera module but not yet the virtual image from the virtual object generation module, sending the real image directly to the display module for display reduces the time during which the display module (or display) shows no picture or a blank picture when the video see-through system first starts.
  • By comparison, in the former design, when the user has just turned on the display of the video see-through head-mounted device, the display shows no picture (or a blank picture) for an initial period, namely the interval from the 1st frame to the kth frame of the real image described in the example of that design. In this design, the display immediately shows the real scene captured in real time as soon as it is turned on, with no blank period, which improves the user experience. A sketch of this per-frame decision is given below.
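  • As an illustration only (hypothetical names; the patent describes the behavior, not an implementation), the per-frame decision of this second design could be sketched as follows, where `composite` is the mask-based synthesis function sketched after steps 1) to 3) below:

```python
from typing import Callable, Optional
import numpy as np

class SynthesisModuleSketch:
    """Hypothetical sketch of the second design's per-frame behavior."""

    def __init__(self, composite: Callable[[np.ndarray, np.ndarray], np.ndarray]):
        self.composite = composite              # mask-based synthesis (see below)
        self.latest_virtual: Optional[np.ndarray] = None

    def on_virtual_frame(self, virtual: np.ndarray) -> None:
        # Called whenever the virtual object generation module delivers an image.
        self.latest_virtual = virtual

    def on_real_frame(self, real: np.ndarray) -> np.ndarray:
        # Scenario 2: no virtual image yet -> display the real frame directly,
        # so the display is never blank at startup.
        if self.latest_virtual is None:
            return real
        # Scenario 1: a virtual image exists -> synthesize, then display.
        return self.composite(real, self.latest_virtual)
```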
  • The specific principle by which the virtual reality synthesis module synthesizes the virtual image and the real image is described below with reference to FIGS. 3-5. FIG. 3 shows a schematic diagram of a virtual image provided by an embodiment of the present application, FIG. 4 shows a schematic diagram of a real image provided by an embodiment of the present application, and FIG. 5 shows a schematic diagram of a composite image provided by an embodiment of the present application.
  • FIG. 3 shows a virtual image containing a virtual object generated by the virtual object generation module, where the unfilled annular blank area represents the virtual object; the area occupied by the virtual object in the virtual image consists of valid pixels, and the diagonally hatched area consists of invalid pixels.
  • FIG. 4 shows a real image of a real scene obtained by the camera module.
  • After the virtual reality synthesis module receives the virtual image shown in FIG. 3 and the real image shown in FIG. 4, it can remove the invalid pixels from the virtual image shown in FIG. 3 and synthesize the remaining virtual image (now containing only the valid pixels of the region where the virtual object is located) with the real image shown in FIG. 4 to obtain the composite image shown in FIG. 5. The virtual reality synthesis module can then send the composite image shown in FIG. 5 to the display module for display.
  • Optionally, the virtual reality synthesis module may synthesize the virtual image and the real image through the following steps 1) to 3).
  • 1) Resize the virtual image and the real image to a first size, e.g., to M*N pixels, so that the virtual image and the real image are pixel-aligned.
  • For example, M*N may be 848*480, 300*150, etc.; M and N may be adjusted according to display requirements, the virtual image, and/or the real image, and their sizes are not limited in this application.
  • 2) For the M*N virtual image obtained in 1), mark the invalid pixels of the virtual image as 0 and the valid pixels as 1, generating an M*N 1-bit mask image X.
  • 3) Synthesize the real image and the virtual object: according to the mask image X, if the mask at row i, column j is 1, use the pixel at row i, column j of the virtual image; otherwise, use the pixel at row i, column j of the real image. Here i is an integer greater than 0 and less than or equal to M, and j is an integer greater than 0 and less than or equal to N.
  • Following steps 1) to 3), the virtual reality synthesis module can synthesize the virtual image and the real image to obtain a composite image, which will contain the virtual object (a NumPy sketch of these steps is given below).
  • Optionally, in some embodiments of the present application, the algorithm implementing the function of the virtual reality synthesis module (such as the mask-based synthesis algorithm above) may be hardened in a chip, and that chip used as the virtual reality synthesis module, to reduce the computation delay of the virtual reality synthesis module.
  • Optionally, in the embodiments of the present application, the chip implementing the function of the virtual reality synthesis module may use higher-bandwidth communication protocols, such as the mobile industry processor interface-camera serial interface (MIPI-CSI) protocol and the mobile industry processor interface-display serial interface (MIPI-DSI) protocol, to send images (real images or composite images) to the display module for display. This reduces transmission delay, further lowers the overall system delay of the video see-through system, and mitigates the negative impact of the "time misalignment" between the user and reality.
  • Based on the video see-through system provided by the foregoing embodiments, the embodiments of the present application further provide a video see-through method, which can be applied to the video see-through system.
  • For example, the execution body of the method may be the virtual reality synthesis module in the video see-through system, or a chip having the functions of the virtual reality synthesis module.
  • The video see-through method includes: acquiring, in parallel, a real image corresponding to a real-world scene and a virtual image containing a virtual object; determining a first image according to the real image and the acquisition result of the virtual image, where the first image is the real image or a composite image of the real image and the virtual image; and displaying the first image.
  • In one possible design, determining the first image according to the real image and the acquisition result of the virtual image includes: for each frame of the real image, if the virtual image has not yet been acquired when the real image is acquired, waiting until the virtual image is acquired and then synthesizing the real image and the virtual image to obtain a composite image as the first image.
  • It can be understood that if the virtual image has already been acquired when the real image is acquired, the real image and the virtual image are synthesized directly to obtain the composite image as the first image.
  • In another possible design, determining the first image according to the real image and the acquisition result of the virtual image includes: for a real image acquired before the virtual image is acquired, directly determining the real image to be the first image; and for a real image acquired after the virtual image is acquired, synthesizing the real image and the virtual image to obtain a composite image as the first image.
  • Optionally, the SLAM module, the plane detection module, etc. in the intermediate processing module described in the foregoing embodiments may also be replaced with modules corresponding to other algorithms capable of implementing the same functions, such as deep learning algorithms; this is not limited here.
  • FIG. 6 shows a schematic structural diagram of a video see-through apparatus provided by an embodiment of the present application.
  • The video see-through apparatus may include an acquiring unit 601, a synthesizing unit 602, and a display unit 603.
  • The acquiring unit 601 is configured to acquire, in parallel, the real image corresponding to the real-world scene and the virtual image containing the virtual object; the synthesizing unit 602 is configured to determine the first image according to the real image and the acquisition result of the virtual image, where the first image is the real image or a composite image of the real image and the virtual image; and the display unit 603 is configured to display the first image.
  • For example, the acquiring unit 601 may acquire the real image corresponding to the real-world scene captured by the camera module and, in parallel, the virtual image generated by the virtual object generation module.
  • The display unit 603 may send the first image to a display for display, or the display unit 603 may itself be a display.
  • In one possible design, the synthesizing unit 602 is specifically configured to: for each frame of the real image, if the virtual image has not yet been acquired when the real image is acquired, wait until the virtual image is acquired and then synthesize the real image and the virtual image to obtain a composite image as the first image; if the virtual image has already been acquired when the real image is acquired, synthesize the real image and the virtual image directly to obtain the composite image as the first image.
  • In another possible design, the synthesizing unit 602 is specifically configured to: for a real image acquired before the virtual image is acquired, directly determine the real image to be the first image; and for a real image acquired after the virtual image is acquired, synthesize the real image and the virtual image to obtain a composite image as the first image.
  • Optionally, the synthesizing unit 602 is specifically configured to: resize the real image and the virtual image to a first size; mark valid pixels in the virtual image as 1 and invalid pixels as 0 to obtain a mask image corresponding to the virtual image, where the valid pixels are the pixels occupied by the virtual object in the virtual image and the invalid pixels are the pixels in the virtual image other than the valid pixels; and synthesize the real image and the virtual image according to the mask image to obtain a composite image.
  • It should be understood that the division of units in the above apparatus is merely a division of logical functions; in actual implementation, they may be fully or partially integrated into one physical entity or physically separated.
  • All units of the apparatus may be implemented in the form of software invoked by a processing element, or all in the form of hardware, or some units in software invoked by a processing element and some in hardware.
  • For example, each unit may be a separately established processing element, or may be integrated in a chip of the apparatus; a unit may also be stored in memory in the form of a program, to be invoked by a processing element of the apparatus to execute the unit's functions.
  • All or some of these units may be integrated together or implemented independently.
  • The processing element described here, also called a processor, may be an integrated circuit with signal processing capability.
  • In implementation, the steps of the above method or the above units may be implemented by integrated logic circuits of hardware in the processor element, or in the form of software invoked by the processing element.
  • In one example, the units in the above apparatus may be one or more integrated circuits configured to implement the above method, e.g., one or more application-specific integrated circuits (ASICs), or one or more digital signal processors (DSPs), or one or more field-programmable gate arrays (FPGAs), or a combination of at least two of these integrated circuit forms.
  • For another example, when a unit in the apparatus is implemented in the form of a processing element scheduling a program, the processing element may be a general-purpose processor, such as a CPU or another processor capable of invoking programs.
  • For yet another example, these units may be integrated together and implemented in the form of a system-on-a-chip (SoC).
  • In one implementation, the units by which the above apparatus implements the corresponding steps of the above method may be implemented in the form of a processing element scheduling a program.
  • For example, the apparatus may include a processing element and a storage element, with the processing element invoking a program stored in the storage element to execute the method described in the above method embodiments.
  • The storage element may be a storage element on the same chip as the processing element, i.e., an on-chip storage element.
  • Alternatively, the program for performing the above method may be in a storage element on a different chip from the processing element, i.e., an off-chip storage element.
  • In that case, the processing element calls or loads the program from the off-chip storage element onto the on-chip storage element, to invoke and execute the method described in the above method embodiments.
  • For example, an embodiment of the present application may further provide an apparatus, such as an electronic device, which may include a processor and a memory for storing instructions executable by the processor.
  • The processor is configured to execute the instructions to cause the electronic device to implement the method described in the foregoing embodiments.
  • For example, the electronic device may be the video see-through head-mounted device described in the previous embodiments.
  • The memory may be located within the electronic device or external to it, and there may be one or more processors.
  • In yet another implementation, the units by which the apparatus implements the steps of the above method may be configured as one or more processing elements, where the processing elements may be integrated circuits, e.g., one or more ASICs, or one or more DSPs, or one or more FPGAs, or a combination of these types of integrated circuits; these integrated circuits may be integrated together to form a chip.
  • For example, an embodiment of the present application further provides a chip, which can be applied to the above electronic device.
  • The chip includes one or more interface circuits and one or more processors, interconnected by lines; the processor receives computer instructions from the memory of the electronic device through the interface circuit and executes them to implement the method described in the above method embodiments.
  • Embodiments of the present application further provide a computer program product, including computer-readable code which, when run in an electronic device, causes the electronic device to implement the method described in the foregoing embodiments.
  • In the several embodiments provided in this application, it should be understood that the disclosed apparatus and method may be implemented in other ways.
  • The apparatus embodiments described above are merely illustrative.
  • The division of the modules or units is only a logical functional division; in actual implementation, there may be other division methods.
  • For example, multiple units or components may be combined or integrated into another apparatus, or some features may be omitted or not implemented.
  • The mutual couplings or direct couplings or communication connections shown or discussed may be indirect couplings or communication connections through some interfaces, apparatuses, or units, and may be electrical, mechanical, or in other forms.
  • The units described as separate components may or may not be physically separated, and components shown as units may be one physical unit or multiple physical units, i.e., located in one place or distributed across multiple places. Some or all of the units may be selected according to actual needs to achieve the purpose of the solutions of the embodiments.
  • The functional units in the embodiments of the present application may be integrated into one processing unit, or each unit may exist physically alone, or two or more units may be integrated into one unit.
  • The above integrated units may be implemented in the form of hardware, or in the form of software functional units.
  • If the integrated unit is implemented in the form of a software functional unit and sold or used as an independent product, it may be stored in a readable storage medium.
  • Based on this understanding, the software product is stored in a program product, such as a computer-readable storage medium, and includes several instructions for causing a device (which may be a single-chip microcomputer, a chip, etc.) or a processor to execute all or some of the steps of the methods described in the embodiments of the present application.
  • The aforementioned storage medium includes media capable of storing program code, such as a USB flash drive, a removable hard disk, a ROM, a RAM, a magnetic disk, or an optical disc.
  • The embodiments of the present application may further provide a computer-readable storage medium on which computer program instructions are stored.
  • When executed by the electronic device, the computer program instructions cause the electronic device to implement the methods described in the foregoing method embodiments.
  • FIG. 7 shows another schematic structural diagram of a video see-through system provided by an embodiment of the present application.
  • The video see-through system includes: a camera module 701, a central processing unit 702, a graphics processor 703, an image synthesis chip 704, and a display 705.
  • The camera module 701 is configured to capture a real image corresponding to a real-world scene and send it directly to the image synthesis chip 704;
  • the central processing unit 702 and the graphics processor 703 are configured to generate a virtual image containing a virtual object and send it to the image synthesis chip 704;
  • the image synthesis chip 704 is configured to acquire the real image and the virtual image in parallel, determine a first image according to the real image and the acquisition result of the virtual image, and send the first image to the display 705, where the first image is the real image or a composite image of the real image and the virtual image;
  • the display 705 is configured to display the first image.
  • In this system, the central processing unit 702 can implement the functions of the SLAM module and the plane detection module described in the foregoing embodiments, and the graphics processor 703 can implement the functions of the virtual object generation module described in the foregoing embodiments.
  • The image synthesis chip 704 can implement the functions of the virtual reality synthesis module described in the foregoing embodiments.
  • Optionally, the video see-through system further includes other sensors, such as infrared sensors and gyroscopes, which are not shown in FIG. 7.
  • The algorithm for synthesizing the virtual image and the real image is hardened in the image synthesis chip, which reduces the computation delay when synthesizing the virtual image and the real image.

Landscapes

  • Engineering & Computer Science (AREA)
  • Multimedia (AREA)
  • Signal Processing (AREA)
  • Processing Or Creating Images (AREA)

Abstract

The present application provides a video see-through method, apparatus, system, electronic device, and storage medium, relating to the field of electronic devices. The method includes: acquiring, in parallel, a real image corresponding to a real-world scene and a virtual image containing a virtual object; determining a first image according to the real image and the acquisition result of the virtual image, where the first image is the real image or a composite image of the real image and the virtual image; and displaying the first image. By acquiring the real image corresponding to the real-world scene and the virtual image containing the virtual object in parallel, the method renders the real image and the virtual object separately before synthesizing them, thereby reducing the overall delay from acquiring the real image to displaying the composite image in the video see-through process.

Description

Video see-through method, apparatus, system, electronic device, and storage medium
This application claims priority to Chinese Patent Application No. 202011198831.6, filed with the China National Intellectual Property Administration on October 31, 2020 and entitled "Video see-through method, apparatus, system, electronic device, and storage medium", which is incorporated herein by reference in its entirety.
Technical Field
Embodiments of the present application relate to the field of electronic devices, and in particular to a video see-through method, apparatus, system, electronic device, and storage medium.
Background
Video see-through technology captures real images of the real world through a camera (also called a camera module), generates a virtual image from the real image, and then synthesizes the virtual image and the real image for display. For example, video see-through technology can be applied to a virtual reality helmet, endowing the helmet with augmented reality (AR) functionality.
At present, when a user uses a video see-through device (e.g., a head-mounted device), a "time misalignment" arises between the user and reality: the composite image seen by the user's eyes lags the real scene by a considerable delay, and the larger the delay, the more obvious the misalignment. For example, when a user wearing a video see-through head-mounted device reaches for an object, the brain may already perceive that the hand has touched the object, but the eyes only see it after a certain delay.
Summary
Embodiments of the present application provide a video see-through method, apparatus, system, electronic device, and storage medium, which render the real image and the virtual object separately and then synthesize them, reducing the overall delay from acquiring the real image to displaying the composite image in the video see-through process.
In a first aspect, an embodiment of the present application provides a video see-through method. The method includes: acquiring, in parallel, a real image corresponding to a real-world scene and a virtual image containing a virtual object; determining a first image according to the real image and the acquisition result of the virtual image, where the first image is the real image or a composite image of the real image and the virtual image; and displaying the first image.
By acquiring the real image corresponding to the real-world scene and the virtual image containing the virtual object in parallel, the method renders the real image and the virtual object separately before synthesizing them, thereby reducing the overall delay from acquiring the real image to displaying the composite image in the video see-through process.
In one possible design, determining the first image according to the real image and the acquisition result of the virtual image includes: for each frame of the real image, if the virtual image has not yet been acquired when the real image is acquired, waiting until the virtual image is acquired and then synthesizing the real image and the virtual image to obtain a composite image as the first image.
It can be understood that if the virtual image has already been acquired when the real image is acquired, the real image and the virtual image are synthesized directly to obtain the composite image as the first image.
In another possible design, determining the first image according to the real image and the acquisition result of the virtual image includes: for a real image acquired before the virtual image is acquired, directly determining the real image to be the first image; and for a real image acquired after the virtual image is acquired, synthesizing the real image and the virtual image to obtain a composite image as the first image.
In this design, for a real image acquired before the virtual image is acquired, directly determining the real image to be the first image allows the real image to be displayed directly, reducing the time during which the display shows no picture or a blank picture when the video see-through system first starts.
Optionally, synthesizing the real image and the virtual image to obtain the composite image includes: resizing the real image and the virtual image to a first size; marking valid pixels in the virtual image as 1 and invalid pixels as 0 to obtain a mask image corresponding to the virtual image, where the valid pixels are the pixels occupied by the virtual object in the virtual image and the invalid pixels are the pixels in the virtual image other than the valid pixels; and synthesizing the real image and the virtual image according to the mask image to obtain the composite image.
In a second aspect, an embodiment of the present application provides a video see-through apparatus, which can be used to implement the method described in the first aspect. The functions of the apparatus may be implemented by hardware, or by hardware executing corresponding software. The hardware or software includes one or more modules or units corresponding to the above functions, for example, an acquiring unit, a synthesizing unit, and a display unit.
The acquiring unit is configured to acquire, in parallel, the real image corresponding to the real-world scene and the virtual image containing the virtual object; the synthesizing unit is configured to determine the first image according to the real image and the acquisition result of the virtual image, where the first image is the real image or a composite image of the real image and the virtual image; and the display unit is configured to display the first image.
In one possible design, the synthesizing unit is specifically configured to: for each frame of the real image, if the virtual image has not yet been acquired when the real image is acquired, wait until the virtual image is acquired and then synthesize the real image and the virtual image to obtain a composite image as the first image; if the virtual image has already been acquired when the real image is acquired, synthesize the real image and the virtual image directly to obtain the composite image as the first image.
In another possible design, the synthesizing unit is specifically configured to: for a real image acquired before the virtual image is acquired, directly determine the real image to be the first image; and for a real image acquired after the virtual image is acquired, synthesize the real image and the virtual image to obtain a composite image as the first image.
Optionally, the synthesizing unit is specifically configured to: resize the real image and the virtual image to a first size; mark valid pixels in the virtual image as 1 and invalid pixels as 0 to obtain a mask image corresponding to the virtual image, where the valid pixels are the pixels occupied by the virtual object in the virtual image and the invalid pixels are the pixels in the virtual image other than the valid pixels; and synthesize the real image and the virtual image according to the mask image to obtain a composite image.
For example, the first size may be 848*480, 300*150, etc., and may be adjusted according to display requirements, the virtual image, and/or the real image; this is not limited in this application.
In a third aspect, an embodiment of the present application provides a video see-through system, including: a camera module, a central processing unit, a graphics processor, an image synthesis chip, and a display. The camera module is configured to capture a real image corresponding to a real-world scene and send it directly to the image synthesis chip; the central processing unit and the graphics processor are configured to generate a virtual image containing a virtual object and send it to the image synthesis chip; the image synthesis chip is configured to acquire the real image and the virtual image in parallel, determine a first image according to the real image and the acquisition result of the virtual image, and send the first image to the display, where the first image is the real image or a composite image of the real image and the virtual image; and the display is configured to display the first image.
In this video see-through system, the algorithm for synthesizing the virtual image and the real image is hardened in the image synthesis chip, which reduces the computation delay when synthesizing the virtual image and the real image.
In a fourth aspect, an embodiment of the present application provides an electronic device, which may be a video see-through device, such as a video see-through head-mounted device or video see-through glasses. The electronic device includes: a processor, and a memory for storing instructions executable by the processor; the processor is configured to execute the instructions to cause the electronic device to implement the method described in the first aspect.
In a fifth aspect, an embodiment of the present application provides a computer-readable storage medium on which computer program instructions are stored; when executed by an electronic device, the computer program instructions cause the electronic device to implement the method described in the first aspect.
For the beneficial effects of the second through fifth aspects, reference may be made to the description in the first aspect; they are not repeated here.
In a sixth aspect, an embodiment of the present application provides a computer program product, including computer-readable code which, when run in an electronic device, causes the electronic device to implement the method described in the first aspect.
It should be understood that the description of technical features, technical solutions, beneficial effects, or similar language in this application does not imply that all of the features and advantages can be realized in any single embodiment. Rather, a description of a feature or beneficial effect means that at least one embodiment includes that specific technical feature, technical solution, or beneficial effect. Descriptions of technical features, technical solutions, or beneficial effects in this specification therefore do not necessarily refer to the same embodiment. Furthermore, the technical features, technical solutions, and beneficial effects described in the embodiments may be combined in any suitable manner. Those skilled in the art will understand that an embodiment may be implemented without one or more specific technical features, technical solutions, or beneficial effects of a particular embodiment. In other embodiments, additional technical features and beneficial effects may be identified in specific embodiments that do not embody all of the embodiments.
Brief Description of the Drawings
FIG. 1 shows a schematic diagram of the principle of video see-through;
FIG. 2 shows a schematic structural diagram of a video see-through system provided by an embodiment of the present application;
FIG. 3 shows a schematic diagram of a virtual image provided by an embodiment of the present application;
FIG. 4 shows a schematic diagram of a real image provided by an embodiment of the present application;
FIG. 5 shows a schematic diagram of a composite image provided by an embodiment of the present application;
FIG. 6 shows a schematic structural diagram of a video see-through apparatus provided by an embodiment of the present application;
FIG. 7 shows another schematic structural diagram of a video see-through system provided by an embodiment of the present application.
Detailed Description
With the development of virtual reality (VR) devices, video see-through technology based on camera modules has gradually become a mainstream technology, with a wide range of application scenarios such as viewing the outside world, electronic fences, and MR applications. For example, video see-through technology can be applied to a virtual reality helmet, endowing the helmet with augmented reality (AR) functionality. Video see-through technology captures real images of the real world through a camera (also called a camera module), generates a virtual image from the real image, and then synthesizes the virtual image and the real image for display.
By way of example, FIG. 1 shows a schematic diagram of the principle of video see-through. As shown in FIG. 1, in current video see-through technology, the camera module captures the real-world scene, obtains the real image (or video stream) corresponding to the real-world scene, and transmits it to the intermediate processing module.
The intermediate processing module may include: a simultaneous localization and mapping (SLAM) module, a plane detection module, a virtual object generation module, and a virtual reality synthesis module.
The SLAM module performs localization based on the environment in which the camera module and the other sensors are located, while simultaneously mapping the environment structure from the real image. The other sensors may include gyroscopes, accelerometers, infrared sensors, etc.; for example, the SLAM module may obtain pose information such as rotation and translation collected by the gyroscope and map the environment structure.
The plane detection module detects which parts of the real image are planes, such as desktops and floors. The virtual object generation module combines the processing results of the SLAM module and the plane detection module to generate a virtual object, obtains a virtual image containing the virtual object, and transmits it to the virtual reality synthesis module. The virtual reality synthesis module synthesizes the virtual image output by the virtual object generation module with the real image captured by the camera module to obtain a composite image, and transmits it to the display module.
The display module can display the composite image, for example, presenting it in front of the human eye through a display.
With continued reference to FIG. 1, suppose that in the video see-through process shown in FIG. 1, the step of the camera module acquiring the real image takes t0, the step of the camera module transmitting the real image to the intermediate processing module takes t1, the processing step of the SLAM module takes t2, the processing step of the plane detection module takes t3, the processing step of the virtual object generation module takes t4, the processing step of the virtual reality synthesis module obtaining the composite image takes t5, the step of the virtual reality synthesis module transmitting the composite image to the display module takes t6, and the step of the display module displaying the composite image takes t7. The overall delay T_total of the video see-through process is then the sum of t0 through t7, that is:
T_total = t0 + t1 + t2 + t3 + t4 + t5 + t6 + t7.
In scenarios where the intermediate processing module includes more sub-modules, the overall delay of video see-through may increase further, exceeding the above value of T_total.
From the above principle of video see-through, it can be seen that when a user uses a video see-through device (e.g., a head-mounted device), a "time misalignment" exists between the user and reality: the composite image seen by the user's eyes lags the real scene by the delay T_total, and the larger this delay, the more obvious the misalignment. For example, when a user wearing a video see-through head-mounted device reaches for an object, the brain may already perceive that the hand has touched the object, but the eyes only see it after the delay T_total.
Against this background, embodiments of the present application provide a video see-through system that renders the real image and the virtual object separately and then synthesizes them, reducing the overall delay from acquiring the real image to displaying the composite image in the video see-through process. The embodiments of the present application are described below by way of example with reference to the accompanying drawings.
FIG. 2 shows a schematic structural diagram of a video see-through system provided by an embodiment of the present application. As shown in FIG. 2, the video see-through system may include: a camera module, an intermediate processing module, a display module, and other sensors.
For details of the camera module, the intermediate processing module, the display module, and the other sensors, reference may be made to the foregoing embodiments. For example, the camera module captures the real-world scene, obtains the real image corresponding to the real-world scene, and transmits it to the intermediate processing module. The intermediate processing module may include: a SLAM module, a plane detection module, a virtual object generation module, and a virtual reality synthesis module, each of which can implement the same functions as in the foregoing embodiments.
Optionally, the camera module, also called a camera imaging module, may specifically include a lens, a filter, an image sensor, an image signal processor (ISP), etc., which will not be described one by one here.
Optionally, in the intermediate processing module shown in FIG. 2, the SLAM module and the plane detection module may be implemented on a central processing unit (CPU), the virtual object generation module may be implemented on a graphics processing unit (GPU), and the virtual reality synthesis module may be a separate chip for implementing the function of synthesizing virtual images and real images. This application is not limited in this regard.
The display module may be a display capable of displaying the composite image, e.g., a display on a video see-through head-mounted device.
In addition, the multiple modules shown in FIG. 2 may be integrated into one device, such as a video see-through head-mounted device, or deployed on multiple devices to form a video see-through system. For example, the camera module may be a web camera connected to the Internet, a separate image capture device (e.g., a video camera), etc. The camera module may be connected to a personal computer (PC) or a mobile phone and send the captured real images to the PC or phone; the CPU and GPU in the PC or phone serve as algorithm processing devices implementing the functions of the above intermediate processing module, and the display screen of the PC or phone implements the functions of the above display module. This application is likewise not limited in this regard.
Based on the video see-through system shown in FIG. 2, in the embodiments of the present application, after the camera module obtains the real image, it sends the real image directly to the virtual reality synthesis module. The SLAM module, the plane detection module, and the virtual object generation module process in sequence, and after the virtual image is obtained, the virtual object generation module sends the virtual image to the virtual reality synthesis module. The virtual reality synthesis module synthesizes the received virtual image and real image to obtain a composite image, and transmits the composite image to the display module for display.
In the video see-through system shown in FIG. 2, the step of the camera module acquiring the real image, the processing steps of the SLAM module, the plane detection module, and the virtual object generation module, and the step of the display module displaying the composite image are all the same as in the video see-through process shown in FIG. 1. Based on this, and with continued reference to FIG. 2, suppose that the step of the camera module acquiring the real image still takes t0, the processing step of the SLAM module still takes t2, the processing step of the plane detection module still takes t3, the processing step of the virtual object generation module still takes t4, and the step of the display module displaying the composite image takes t7. In addition, suppose the step of the camera module transmitting the real image to the virtual reality synthesis module takes t1_new, the processing step of the virtual reality synthesis module obtaining the composite image takes t5_new, and the step of the virtual reality synthesis module transmitting the composite image to the display module takes t6_new. Then, in the embodiments of the present application, the overall delay T_total_new of the video see-through process is:
T_total_new = t0 + t1_new + t5_new + t6_new + t7.
Compared with the existing video see-through process shown in FIG. 1, in the video see-through system provided by the embodiments of the present application, the processing steps of the SLAM module, the plane detection module, and the virtual object generation module in the intermediate processing module are performed in parallel with the step of the camera module transmitting the real image to the virtual reality synthesis module and the processing step of the virtual reality synthesis module obtaining the composite image. That is, the virtual image and the real image are rendered separately and then synthesized, and the real image to be synthesized by the virtual reality synthesis module does not pass through intermediate processing modules such as the SLAM module, the plane detection module, and the virtual object generation module. Therefore, the overall delay T_total_new of the video see-through process in the embodiments of the present application is much smaller than the overall delay T_total of the existing video see-through process. The embodiments of the present application can thus effectively reduce the overall system delay of the video see-through system and greatly mitigate the negative impact of the "time misalignment" between the user and reality.
The processing of the virtual reality synthesis module in the embodiments of the present application is briefly described below.
In actual implementation, the processing of the virtual reality synthesis module involves the following two scenarios.
Scenario 1: the virtual reality synthesis module has received both the real image from the camera module and the virtual image from the virtual object generation module.
Scenario 2: the virtual reality synthesis module has received the real image from the camera module but has not received the virtual image from the virtual object generation module.
In one possible design, for Scenario 1, the virtual reality synthesis module synthesizes the real image and the virtual image and then sends the composite image to the display module for display. For Scenario 2, the virtual reality synthesis module can wait until the virtual image is received and then perform operations similar to Scenario 1.
For example, suppose a virtual reality helmet based on video see-through technology obtains the first frame of the real image of the real scene through the camera module at a first moment (e.g., the moment the helmet's power switch is turned on) and sends it to the virtual reality synthesis module, while the virtual object generation module has not yet generated a virtual image containing a virtual object. The virtual reality synthesis module can then determine, on receiving the first frame of the real image, whether a virtual image has been received; if not, it can wait for the virtual image. Suppose the virtual image is received at the kth frame of the real image: the virtual reality synthesis module then synthesizes the 1st through kth frames of the real image with the virtual image respectively and sends the composite images to the display module for display in the order of the corresponding real images. Similarly, when the virtual reality synthesis module subsequently receives the (k+1)th, (k+2)th, (k+3)th, etc. frames of the real image, it uses the same processing as for the kth frame. Here k is an integer greater than 1, e.g., 2, 3, 5, 8, or 10, without limitation.
In another possible design, for Scenario 1, the virtual reality synthesis module synthesizes the real image and the virtual image and then sends the composite image to the display module for display. For Scenario 2, the virtual reality synthesis module sends the real image directly to the display module for display.
For example, suppose a virtual reality helmet based on video see-through technology obtains the first frame of the real image of the real scene through the camera module at a first moment (e.g., the moment the helmet's power switch is turned on) and sends it to the virtual reality synthesis module, while the virtual object generation module has not yet generated a virtual image containing a virtual object. The virtual reality synthesis module determines, on receiving the first frame of the real image, whether a virtual image has been received; since none has been received, it sends the first frame of the real image directly to the display module for display. Similarly, when the virtual reality synthesis module subsequently receives the 2nd, 3rd, 4th, etc. frames of the real image, if no virtual image has been received, it uses the same processing as for the first frame. If the virtual reality synthesis module determines that a virtual image has been received when it receives the kth frame of the real image, it synthesizes the kth frame of the real image with the virtual image and sends the composite image to the display module for display. Similarly, the (k+1)th, (k+2)th, (k+3)th, etc. frames of the real image are then handled in the same way as the kth frame. Here k is an integer greater than 1, e.g., 2, 3, 5, 8, or 10, without limitation.
In this example, the handling of the 1st through (k-1)th frames of the real image follows Scenario 2 above, and the handling of the kth frame and every subsequent frame follows Scenario 1 above.
In this design, when the virtual reality synthesis module has received the real image from the camera module but not the virtual image from the virtual object generation module, it sends the real image directly to the display module for display, which reduces the time during which the display module (or display) shows no picture or a blank picture when the video see-through system first starts.
For example, in the former design, when a user uses a video see-through head-mounted device and has just turned on its display, the display shows no picture (or shows a blank picture) for an initial period, namely the interval from the 1st frame to the kth frame of the real image described in the example of the former design. In this design, when the user has just turned on the display of the video see-through head-mounted device, the display immediately shows the real scene captured in real time, with no blank period, which improves the user experience.
The specific principle by which the virtual reality synthesis module in the embodiments of the present application synthesizes the virtual image and the real image is described below with reference to FIGS. 3-5. FIG. 3 shows a schematic diagram of a virtual image provided by an embodiment of the present application, FIG. 4 shows a schematic diagram of a real image provided by an embodiment of the present application, and FIG. 5 shows a schematic diagram of a composite image provided by an embodiment of the present application.
Referring to FIGS. 3-5: FIG. 3 shows a virtual image containing a virtual object generated by the virtual object generation module, where the unfilled annular blank area represents the virtual object; the area occupied by the virtual object in the virtual image consists of valid pixels, and the diagonally hatched area consists of invalid pixels. FIG. 4 shows a real image of a real scene obtained by the camera module. After the virtual reality synthesis module receives the virtual image shown in FIG. 3 and the real image shown in FIG. 4, it can remove the invalid pixels from the virtual image shown in FIG. 3 and synthesize the remaining virtual image (now containing only the valid pixels of the region where the virtual object is located) with the real image shown in FIG. 4 to obtain the composite image shown in FIG. 5. The virtual reality synthesis module can then send the composite image shown in FIG. 5 to the display module for display.
Optionally, the virtual reality synthesis module may synthesize the virtual image and the real image through the following steps 1) to 3).
1) Resize the virtual image and the real image to a first size, e.g., to M*N pixels, so that the virtual image and the real image are pixel-aligned.
For example, M*N may be 848*480, 300*150, etc. M and N may be adjusted according to display requirements, the virtual image, and/or the real image; their sizes are not limited in this application.
2) For the M*N virtual image obtained in 1), mark the invalid pixels of the virtual image as 0 and the valid pixels as 1, generating an M*N 1-bit mask image X.
3) Synthesize the real image and the virtual object: according to the mask image X, if the mask at row i, column j of the M*N pixels is 1, use the pixel at row i, column j of the virtual image; otherwise, use the pixel at row i, column j of the real image. Here i is an integer greater than 0 and less than or equal to M, and j is an integer greater than 0 and less than or equal to N.
Following steps 1) to 3), the virtual reality synthesis module can synthesize the virtual image and the real image to obtain a composite image, which will contain the virtual object.
Optionally, in some embodiments of the present application, the algorithm implementing the function of the virtual reality synthesis module (such as the mask-based synthesis algorithm above) may be hardened in a chip, and that chip used as the virtual reality synthesis module, to reduce the computation delay of the virtual reality synthesis module.
Optionally, in the embodiments of the present application, the chip implementing the function of the virtual reality synthesis module may use higher-bandwidth communication protocols, such as the mobile industry processor interface-camera serial interface (MIPI-CSI) protocol and the mobile industry processor interface-display serial interface (MIPI-DSI) protocol, to send images (real images or composite images) to the display module for display. This reduces transmission delay, further lowers the overall system delay of the video see-through system, and mitigates the negative impact of the "time misalignment" between people and reality.
Based on the video see-through system provided by the foregoing embodiments, an embodiment of the present application further provides a video see-through method, which can be applied to the video see-through system. For example, the execution body of the method may be the virtual reality synthesis module in the video see-through system, or a chip having the functions of the virtual reality synthesis module. The video see-through method includes: acquiring, in parallel, a real image corresponding to a real-world scene and a virtual image containing a virtual object; determining a first image according to the real image and the acquisition result of the virtual image, where the first image is the real image or a composite image of the real image and the virtual image; and displaying the first image.
For example, in one possible design, determining the first image according to the real image and the acquisition result of the virtual image includes: for each frame of the real image, if the virtual image has not yet been acquired when the real image is acquired, waiting until the virtual image is acquired and then synthesizing the real image and the virtual image to obtain a composite image as the first image.
It can be understood that if the virtual image has already been acquired when the real image is acquired, the real image and the virtual image are synthesized directly to obtain the composite image as the first image.
For another example, in another possible design, determining the first image according to the real image and the acquisition result of the virtual image includes: for a real image acquired before the virtual image is acquired, directly determining the real image to be the first image; and for a real image acquired after the virtual image is acquired, synthesizing the real image and the virtual image to obtain a composite image as the first image.
For the specific implementation of the method, reference may be made to the foregoing embodiments.
Optionally, the SLAM module, the plane detection module, etc. in the intermediate processing module described in the foregoing embodiments of the present application may also be replaced with modules corresponding to other algorithms capable of implementing the same functions, such as deep learning algorithms; this is not limited here.
Corresponding to the method described in the foregoing embodiments, an embodiment of the present application further provides a video see-through apparatus, which can be used to implement the foregoing video see-through method. The functions of the apparatus may be implemented by hardware, or by hardware executing corresponding software. The hardware or software includes one or more modules or units corresponding to the above functions. For example, FIG. 6 shows a schematic structural diagram of a video see-through apparatus provided by an embodiment of the present application. As shown in FIG. 6, the video see-through apparatus may include an acquiring unit 601, a synthesizing unit 602, and a display unit 603.
The acquiring unit 601 is configured to acquire, in parallel, the real image corresponding to the real-world scene and the virtual image containing the virtual object; the synthesizing unit 602 is configured to determine the first image according to the real image and the acquisition result of the virtual image, where the first image is the real image or a composite image of the real image and the virtual image; and the display unit 603 is configured to display the first image.
For example, the acquiring unit 601 may acquire the real image corresponding to the real-world scene captured by the camera module and, in parallel, the virtual image generated by the virtual object generation module. The display unit 603 may send the first image to a display for display, or the display unit 603 may itself be a display.
In one possible design, the synthesizing unit 602 is specifically configured to: for each frame of the real image, if the virtual image has not yet been acquired when the real image is acquired, wait until the virtual image is acquired and then synthesize the real image and the virtual image to obtain a composite image as the first image; if the virtual image has already been acquired when the real image is acquired, synthesize the real image and the virtual image directly to obtain the composite image as the first image.
In another possible design, the synthesizing unit 602 is specifically configured to: for a real image acquired before the virtual image is acquired, directly determine the real image to be the first image; and for a real image acquired after the virtual image is acquired, synthesize the real image and the virtual image to obtain a composite image as the first image.
Optionally, the synthesizing unit 602 is specifically configured to: resize the real image and the virtual image to a first size; mark valid pixels in the virtual image as 1 and invalid pixels as 0 to obtain a mask image corresponding to the virtual image, where the valid pixels are the pixels occupied by the virtual object in the virtual image and the invalid pixels are the pixels in the virtual image other than the valid pixels; and synthesize the real image and the virtual image according to the mask image to obtain a composite image.
It should be understood that the division of units or modules (hereinafter referred to as units) in the above apparatus is merely a division of logical functions; in actual implementation, they may be fully or partially integrated into one physical entity or physically separated. The units in the apparatus may all be implemented in the form of software invoked by a processing element, or all in the form of hardware, or some units in the form of software invoked by a processing element and some in the form of hardware.
For example, each unit may be a separately established processing element, or may be integrated in a chip of the apparatus; a unit may also be stored in memory in the form of a program, to be invoked by a processing element of the apparatus to execute the unit's functions. In addition, all or some of these units may be integrated together or implemented independently. The processing element described here, also called a processor, may be an integrated circuit with signal processing capability. In implementation, the steps of the above method or the above units may be implemented by integrated logic circuits of hardware in the processor element, or in the form of software invoked by the processing element.
In one example, the units in the above apparatus may be one or more integrated circuits configured to implement the above method, for example: one or more application-specific integrated circuits (ASICs), or one or more digital signal processors (DSPs), or one or more field-programmable gate arrays (FPGAs), or a combination of at least two of these integrated circuit forms.
For another example, when a unit in the apparatus is implemented in the form of a processing element scheduling a program, the processing element may be a general-purpose processor, such as a CPU or another processor capable of invoking programs. For yet another example, these units may be integrated together and implemented in the form of a system-on-a-chip (SoC).
In one implementation, the units by which the above apparatus implements the corresponding steps of the above method may be implemented in the form of a processing element scheduling a program. For example, the apparatus may include a processing element and a storage element, with the processing element invoking a program stored in the storage element to execute the method described in the above method embodiments. The storage element may be a storage element on the same chip as the processing element, i.e., an on-chip storage element.
In another implementation, the program for executing the above method may be in a storage element on a different chip from the processing element, i.e., an off-chip storage element. In this case, the processing element calls or loads the program from the off-chip storage element onto the on-chip storage element to invoke and execute the method described in the above method embodiments.
For example, an embodiment of the present application may further provide an apparatus, such as an electronic device, which may include: a processor, and a memory for storing instructions executable by the processor. The processor is configured to execute the instructions to cause the electronic device to implement the method described in the foregoing embodiments. For example, the electronic device may be the video see-through head-mounted device described in the foregoing embodiments. The memory may be located inside or outside the electronic device, and there may be one or more processors.
In yet another implementation, the units by which the apparatus implements the steps of the above method may be configured as one or more processing elements, where the processing elements may be integrated circuits, for example: one or more ASICs, or one or more DSPs, or one or more FPGAs, or a combination of these types of integrated circuits. These integrated circuits may be integrated together to form a chip.
For example, an embodiment of the present application further provides a chip, which can be applied to the above electronic device. The chip includes one or more interface circuits and one or more processors, interconnected by lines; the processor receives computer instructions from the memory of the electronic device through the interface circuit and executes them to implement the method described in the above method embodiments.
An embodiment of the present application further provides a computer program product, including computer-readable code which, when run in an electronic device, causes the electronic device to implement the method described in the foregoing embodiments.
From the description of the above implementations, those skilled in the art can clearly understand that, for convenience and brevity of description, only the division of the above functional modules is used as an example; in practical applications, the above functions may be allocated to different functional modules as needed, i.e., the internal structure of the apparatus may be divided into different functional modules to complete all or some of the functions described above.
In the several embodiments provided in this application, it should be understood that the disclosed apparatus and method may be implemented in other ways. For example, the apparatus embodiments described above are merely illustrative; the division of the modules or units is only a logical functional division, and other divisions are possible in actual implementation; for example, multiple units or components may be combined or integrated into another apparatus, or some features may be omitted or not implemented. In addition, the mutual couplings or direct couplings or communication connections shown or discussed may be indirect couplings or communication connections through some interfaces, apparatuses, or units, and may be electrical, mechanical, or in other forms.
The units described as separate components may or may not be physically separated, and components shown as units may be one physical unit or multiple physical units, i.e., located in one place or distributed across multiple places. Some or all of the units may be selected according to actual needs to achieve the purpose of the solutions of the embodiments.
In addition, the functional units in the embodiments of the present application may be integrated into one processing unit, or each unit may exist physically alone, or two or more units may be integrated into one unit. The above integrated unit may be implemented in the form of hardware or in the form of a software functional unit.
If the integrated unit is implemented in the form of a software functional unit and sold or used as an independent product, it may be stored in a readable storage medium. Based on this understanding, the technical solutions of the embodiments of the present application, in essence, or the part contributing to the prior art, or all or part of the technical solutions, may be embodied in the form of a software product, e.g., a program. The software product is stored in a program product, such as a computer-readable storage medium, and includes several instructions for causing a device (which may be a single-chip microcomputer, a chip, etc.) or a processor to execute all or some of the steps of the methods described in the embodiments of the present application. The aforementioned storage medium includes media capable of storing program code, such as a USB flash drive, a removable hard disk, a ROM, a RAM, a magnetic disk, or an optical disc.
For example, an embodiment of the present application may further provide a computer-readable storage medium on which computer program instructions are stored. When executed by an electronic device, the computer program instructions cause the electronic device to implement the method described in the foregoing method embodiments.
Optionally, an embodiment of the present application further provides a video see-through system. FIG. 7 shows another schematic structural diagram of a video see-through system provided by an embodiment of the present application. As shown in FIG. 7, the video see-through system includes: a camera module 701, a central processing unit 702, a graphics processor 703, an image synthesis chip 704, and a display 705. The camera module 701 is configured to capture a real image corresponding to a real-world scene and send it directly to the image synthesis chip 704; the central processing unit 702 and the graphics processor 703 are configured to generate a virtual image containing a virtual object and send it to the image synthesis chip 704; the image synthesis chip 704 is configured to acquire the real image and the virtual image in parallel, determine a first image according to the real image and the acquisition result of the virtual image, and send the first image to the display 705, where the first image is the real image or a composite image of the real image and the virtual image; and the display 705 is configured to display the first image.
That is, in the video see-through system shown in FIG. 7, the central processing unit 702 can implement the functions of the SLAM module and the plane detection module described in the foregoing embodiments, the graphics processor 703 can implement the functions of the virtual object generation module described in the foregoing embodiments, and the image synthesis chip 704 can implement the functions of the virtual reality synthesis module described in the foregoing embodiments.
Optionally, the video see-through system further includes other sensors such as infrared sensors and gyroscopes, which are not shown in FIG. 7.
In the video see-through system shown in FIG. 7, the algorithm for synthesizing the virtual image and the real image is hardened in the image synthesis chip, which reduces the computation delay when synthesizing the virtual image and the real image.
The above are merely specific implementations of the present application, but the protection scope of the present application is not limited thereto; any variation or replacement within the technical scope disclosed in the present application shall be covered by the protection scope of the present application. Therefore, the protection scope of the present application shall be subject to the protection scope of the claims.

Claims (10)

  1. A video see-through method, wherein the method comprises:
    acquiring, in parallel, a real image corresponding to a real-world scene and a virtual image containing a virtual object;
    determining a first image according to the real image and an acquisition result of the virtual image, wherein the first image is the real image or a composite image of the real image and the virtual image; and
    displaying the first image.
  2. The method according to claim 1, wherein determining the first image according to the real image and the acquisition result of the virtual image comprises:
    for each frame of the real image, if the virtual image has not yet been acquired when the real image is acquired, waiting until the virtual image is acquired and then synthesizing the real image and the virtual image to obtain a composite image as the first image.
  3. The method according to claim 1, wherein determining the first image according to the real image and the acquisition result of the virtual image comprises:
    for a real image acquired before the virtual image is acquired, directly determining the real image to be the first image; and
    for a real image acquired after the virtual image is acquired, synthesizing the real image and the virtual image to obtain a composite image as the first image.
  4. The method according to claim 2 or 3, wherein synthesizing the real image and the virtual image to obtain the composite image comprises:
    resizing the real image and the virtual image to a first size;
    marking valid pixels in the virtual image as 1 and invalid pixels as 0 to obtain a mask image corresponding to the virtual image, wherein the valid pixels are pixels occupied by the virtual object in the virtual image, and the invalid pixels are pixels in the virtual image other than the valid pixels; and
    synthesizing the real image and the virtual image according to the mask image to obtain the composite image.
  5. A video see-through apparatus, wherein the apparatus comprises:
    an acquiring unit, configured to acquire, in parallel, a real image corresponding to a real-world scene and a virtual image containing a virtual object;
    a synthesizing unit, configured to determine a first image according to the real image and an acquisition result of the virtual image, wherein the first image is the real image or a composite image of the real image and the virtual image; and
    a display unit, configured to display the first image.
  6. The apparatus according to claim 5, wherein the synthesizing unit is specifically configured to: for each frame of the real image, if the virtual image has not yet been acquired when the real image is acquired, wait until the virtual image is acquired and then synthesize the real image and the virtual image to obtain a composite image as the first image.
  7. The apparatus according to claim 5, wherein the synthesizing unit is specifically configured to: for a real image acquired before the virtual image is acquired, directly determine the real image to be the first image; and for a real image acquired after the virtual image is acquired, synthesize the real image and the virtual image to obtain a composite image as the first image.
  8. A video see-through system, comprising: a camera module, a central processing unit, a graphics processor, an image synthesis chip, and a display, wherein:
    the camera module is configured to capture a real image corresponding to a real-world scene and send it directly to the image synthesis chip;
    the central processing unit and the graphics processor are configured to generate a virtual image containing a virtual object and send it to the image synthesis chip;
    the image synthesis chip is configured to acquire the real image and the virtual image in parallel, determine a first image according to the real image and an acquisition result of the virtual image, and send the first image to the display, wherein the first image is the real image or a composite image of the real image and the virtual image; and
    the display is configured to display the first image.
  9. An electronic device, comprising: a processor, and a memory configured to store instructions executable by the processor;
    wherein the processor is configured to execute the instructions to cause the electronic device to implement the method according to any one of claims 1-4.
  10. A computer-readable storage medium on which computer program instructions are stored, wherein
    when executed by an electronic device, the computer program instructions cause the electronic device to implement the method according to any one of claims 1-4.
PCT/CN2021/119608 2020-10-31 2021-09-22 Video see-through method, apparatus, system, electronic device, and storage medium WO2022089100A1 (zh)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
CN202011198831.6A CN114449251B (zh) 2020-10-31 2020-10-31 Video see-through method, apparatus, system, electronic device, and storage medium
CN202011198831.6 2020-10-31

Publications (1)

Publication Number Publication Date
WO2022089100A1 (zh)

Family

ID=81357908

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2021/119608 WO2022089100A1 (zh) 2020-10-31 2021-09-22 Video see-through method, apparatus, system, electronic device, and storage medium

Country Status (2)

Country Link
CN (1) CN114449251B (zh)
WO (1) WO2022089100A1 (zh)

Families Citing this family (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN115082795A (zh) * 2022-07-04 2022-09-20 梅卡曼德(北京)机器人科技有限公司 Virtual image generation method, apparatus, device, medium, and product

Family Cites Families (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN103116451B (zh) * 2013-01-25 2018-10-26 腾讯科技(深圳)有限公司 Virtual character interaction method, apparatus, and system for an intelligent terminal
CN104134229A (zh) * 2014-08-08 2014-11-05 李成 Real-time interactive augmented reality system and method
US10580040B2 (en) * 2016-04-03 2020-03-03 Integem Inc Methods and systems for real-time image and signal processing in augmented reality based communications
CN107077755B (zh) * 2016-09-30 2021-06-04 达闼机器人有限公司 Virtual-real fusion method and system, and virtual reality device
CN110244840A (zh) * 2019-05-24 2019-09-17 华为技术有限公司 Image processing method, related device, and computer storage medium

Patent Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20060244820A1 (en) * 2005-04-01 2006-11-02 Canon Kabushiki Kaisha Image processing method and image processing apparatus
US20100182340A1 (en) * 2009-01-19 2010-07-22 Bachelder Edward N Systems and methods for combining virtual and real-time physical environments
CN106055113A (zh) * 2016-07-06 2016-10-26 北京华如科技股份有限公司 Mixed-reality helmet display system and control method
CN108924540A (zh) * 2017-08-08 2018-11-30 罗克韦尔柯林斯公司 Low-latency mixed-reality head-wearable device
CN108037863A (zh) * 2017-12-12 2018-05-15 北京小米移动软件有限公司 Method and apparatus for displaying an image
CN111415422A (zh) * 2020-04-17 2020-07-14 Oppo广东移动通信有限公司 Virtual object adjustment method and apparatus, storage medium, and augmented reality device

Also Published As

Publication number Publication date
CN114449251A (zh) 2022-05-06
CN114449251B (zh) 2024-01-16

Similar Documents

Publication Publication Date Title
JP7408678B2 (ja) Image processing method and head-mounted display device
KR102358932B1 (ko) Stabilization plane determination based on gaze location
WO2017113681A1 (zh) Virtual reality-based video image processing method and apparatus
CN107844190B (zh) Image display method and apparatus based on a virtual reality (VR) device
US11120632B2 (en) Image generating apparatus, image generating system, image generating method, and program
WO2018233217A1 (zh) Image processing method, apparatus, and augmented reality device
WO2019053997A1 (ja) Information processing apparatus, information processing method, and program
US11003408B2 (en) Image generating apparatus and image generating method
WO2022089100A1 (zh) Video see-through method, apparatus, system, electronic device, and storage medium
CN115209057B (zh) Shooting focusing method and related electronic device
WO2019098198A1 (ja) Image generation device, head-mounted display, image generation system, image generation method, and program
JP2021526693A (ja) Pose correction
KR20210113100A (ko) Super-resolution depth map generation for multi-camera or other environments
JP2023036676A (ja) Method and device for process data sharing
WO2018133312A1 (zh) Processing method and device
JP6904684B2 (ja) Image processing apparatus, image processing method, and program
CN110956571A (zh) Method for virtual-real fusion based on SLAM, and electronic device
US11373273B2 (en) Method and device for combining real and virtual images
JPWO2021020150A5 (zh)
WO2023001113A1 (zh) Display method and electronic device
US11606498B1 (en) Exposure of panoramas
EP4293619A1 (en) Image processing method and related device
WO2021170127A1 (zh) Three-dimensional reconstruction method and apparatus for a half-body portrait
US11636708B2 (en) Face detection in spherical images
CN104320576B (zh) Image processing method and image processing apparatus for a portable terminal

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 21884830

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: DE

122 Ep: pct application non-entry in european phase

Ref document number: 21884830

Country of ref document: EP

Kind code of ref document: A1