WO2021078268A1 - Method, system and apparatus for implementing omnidirectional visual obstacle avoidance, and storage medium - Google Patents

Method, system and apparatus for implementing omnidirectional visual obstacle avoidance, and storage medium

Info

Publication number
WO2021078268A1
WO2021078268A1 PCT/CN2020/123317 CN2020123317W WO2021078268A1 WO 2021078268 A1 WO2021078268 A1 WO 2021078268A1 CN 2020123317 W CN2020123317 W CN 2020123317W WO 2021078268 A1 WO2021078268 A1 WO 2021078268A1
Authority
WO
WIPO (PCT)
Prior art keywords
image
obstacle avoidance
image data
omnidirectional
visual obstacle
Prior art date
Application number
PCT/CN2020/123317
Other languages
English (en)
French (fr)
Inventor
李昭早
Original Assignee
深圳市道通智能航空技术有限公司
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by 深圳市道通智能航空技术有限公司 filed Critical 深圳市道通智能航空技术有限公司
Publication of WO2021078268A1 publication Critical patent/WO2021078268A1/zh
Priority to US17/660,504 priority Critical patent/US20220256097A1/en

Classifications

    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N5/00 Details of television systems
    • H04N5/222 Studio circuitry; Studio devices; Studio equipment
    • H04N5/262 Studio circuits, e.g. for mixing, switching-over, change of character of image, other special effects; Cameras specially adapted for the electronic generation of special effects
    • H04N5/265 Mixing
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N7/00 Television systems
    • H04N7/18 Closed-circuit television [CCTV] systems, i.e. systems in which the video signal is not broadcast
    • H04N7/181 Closed-circuit television [CCTV] systems, i.e. systems in which the video signal is not broadcast for receiving images from a plurality of remote sources
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N5/00 Details of television systems
    • H04N5/222 Studio circuitry; Studio devices; Studio equipment
    • H04N5/262 Studio circuits, e.g. for mixing, switching-over, change of character of image, other special effects; Cameras specially adapted for the electronic generation of special effects
    • H04N5/2624 Studio circuits, e.g. for mixing, switching-over, change of character of image, other special effects; Cameras specially adapted for the electronic generation of special effects for obtaining an image which is composed of whole input images, e.g. splitscreen

Definitions

  • the embodiments of the present invention relate to the field of unmanned aerial vehicles, and in particular to a method, system, device and storage medium for implementing omnidirectional visual obstacle avoidance.
  • UAV obstacle avoidance is now required to support omnidirectional obstacle avoidance in six directions: front, bottom, rear, left, right and top. The coordinates of the same object differ slightly between the images of two lenses, so the distance to an obstacle can be obtained after conversion, and a depth image of the obstacle can also be obtained by the binocular vision method. Therefore, to achieve omnidirectional visual obstacle avoidance, at least 6 pairs, i.e. 12 lenses, are required, plus the main lens, 13 lenses in total, but the main chips currently on the market support at most 8 lens inputs, which falls far short of the needs of omnidirectional obstacle avoidance.
  • for the multiple channels of acquired image signals, the existing image signal processors (Image Signal Processing, ISP) and the image processing of the main chip become a bottleneck: when a large amount of image information needs to be processed synchronously, a single chip cannot deliver the required performance.
  • UAV obstacle avoidance requires high real-time performance and fast processing speed, but the existing technology cannot meet this demand.
  • the image signals collected by the multiple lenses of existing drones cannot be processed quickly and promptly, and processing efficiency and performance are also insufficient.
  • the main purpose of the present invention is to provide a method, system, device and storage medium for implementing omnidirectional visual obstacle avoidance, aiming to solve the problems of connecting multiple lenses and of image processing efficiency and performance in existing UAV omnidirectional visual obstacle avoidance.
  • the present invention provides a method for implementing omnidirectional visual obstacle avoidance, which includes:
  • Step S10: Send a trigger signal to the image acquisition device, so that the acquisition device collects the image signal;
  • Step S20: Perform merging processing on the image signals to obtain merged image data;
  • Step S30: Perform disassembly processing on the merged image data to obtain disassembled image data;
  • Step S40: Perform visual processing on the disassembled image data to obtain a visual image.
  • the trigger signal is sent to the acquisition device through a synchronized trigger clock; furthermore, the trigger signal is a pulse signal.
  • the image signals are combined by an image signal processor (Image Signal Processing, ISP) to obtain combined image data.
  • the disassembly processing in the step S30 includes:
  • the combined image data is disassembled according to the start address offset of the combined image and the image width span size to obtain the disassembled image data.
  • the present invention also provides an omnidirectional visual obstacle avoidance realization system, including:
  • a synchronous trigger clock, used to send the trigger signal to the acquisition device to trigger the image acquisition device to acquire an image signal;
  • a plurality of ISPs and a main chip, used to merge the image signals to obtain merged image data; and
  • the main chip, used for disassembling the merged image data and visually processing the disassembled image data to obtain a visual image.
  • the trigger signal is a pulse signal.
  • steps for the main chip to perform disassembly processing include:
  • the combined image data is disassembled according to the start address offset of the combined image and the image width span size to obtain the disassembled image data.
  • the present invention also provides an omnidirectional visual obstacle avoidance implementation device. The device includes a memory and a processor, the memory stores an omnidirectional visual obstacle avoidance program that can run on the processor, and when the omnidirectional visual obstacle avoidance program is executed by the processor, the steps of the foregoing method for implementing omnidirectional visual obstacle avoidance are realized.
  • the present invention also provides a computer-readable storage medium. The storage medium stores an omnidirectional visual obstacle avoidance program, and the omnidirectional visual obstacle avoidance program can be executed by one or more processors to realize the steps of the above-mentioned omnidirectional visual obstacle avoidance method.
  • the omnidirectional visual obstacle avoidance realization method and device and the computer-readable storage medium proposed in the present invention solve the problems of connecting multiple lenses and of insufficient image processing performance in prior-art UAV omnidirectional visual obstacle avoidance, and realize omnidirectional visual obstacle avoidance for the UAV.
  • FIG. 1 is a schematic flowchart of a method for implementing omnidirectional visual obstacle avoidance provided by an embodiment of the present invention.
  • Figure 2 is a schematic diagram of an omnidirectional visual obstacle avoidance implementation system according to an embodiment of the present invention.
  • FIG. 3 is a schematic diagram of sending the trigger signal synchronously with a trigger clock according to an embodiment of the present invention
  • FIG. 4 is a schematic diagram of merging two channels of image signals into one channel according to an embodiment of the present invention.
  • FIG. 5 is a schematic diagram of merging two channels of image signals into one channel and then combining them again according to an embodiment of the present invention
  • FIG. 6 is a schematic diagram of four channels of image signals directly combined into one channel provided by an embodiment of the present invention.
  • FIG. 7 is a schematic diagram of a method for disassembling image data according to an embodiment of the present invention.
  • FIG. 8 is a schematic diagram of a second method for disassembling image data according to an embodiment of the present invention.
  • FIG. 9 is a schematic diagram of the internal structure of an omnidirectional visual obstacle avoidance implementation device provided by an embodiment of the present invention.
  • FIG. 10 is a schematic diagram of modules of an omnidirectional visual obstacle avoidance program in an omnidirectional visual obstacle avoidance implementation device provided by an embodiment of the present invention.
  • FIG. 1 is a schematic flow chart of a method for implementing omnidirectional visual obstacle avoidance provided by an embodiment of the present invention.
  • the method for implementing omnidirectional visual obstacle avoidance provided by the present invention is applied to an unmanned aerial vehicle.
  • the omnidirectional visual obstacle avoidance implementation method includes:
  • Step S10: Send a trigger signal to the image acquisition device so that the acquisition device acquires the image signal; specifically, the trigger signal is sent to the acquisition device through a synchronized trigger clock; further, the trigger signal is a pulse signal.
  • the image acquisition device is a drone lens, and the image acquisition device collects image signals after receiving the trigger signal.
  • Step S20: Perform merging processing on the image signals to obtain merged image data.
  • the image signals are combined by an image signal processor (Image Signal Processing, ISP) to obtain combined image data.
  • Step S30: Perform disassembly processing on the merged image data to obtain disassembled image data.
  • Step S40: Perform visual processing on the disassembled image data to obtain a visual image.
  • FIG. 2 is a schematic diagram of an omnidirectional visual obstacle avoidance implementation system provided by an embodiment of the present invention.
  • the omnidirectional visual obstacle avoidance system includes a synchronous trigger clock 100, multiple ISPs, and a main chip 200.
  • the synchronous trigger clock 100 is used to send the trigger signal to the acquisition device to trigger the image acquisition device to acquire an image signal
  • the ISP is used to merge the image signals to obtain merged image data.
  • the main chip 200 is used for disassembling the merged image data and performing visual processing on the disassembled image data to obtain a visual image.
  • the image acquisition device in this embodiment refers to multiple lenses of the drone in six orientations; the six orientations are the front, rear, upper, lower, left and right sides of the drone, with 2 lenses in each orientation, namely the front left lens 11, front right lens 12, rear left lens 21, rear right lens 22, lower left lens 31, lower right lens 32, upper left lens 41, upper right lens 42, left-side left lens 51, left-side right lens 52, right-side left lens 61 and right-side right lens 62.
  • FIG. 3 is a schematic diagram of a synchronous trigger clock sending the trigger signal according to an embodiment of the present invention.
  • the synchronous trigger clock periodically sends the pulse signal at a fixed interval; as shown in FIG. 3, a pulse signal is sent every t milliseconds.
  • the value of t milliseconds is set according to the flying speed and processing speed of the drone; in this embodiment, 10 milliseconds, 40 milliseconds and 100 milliseconds were each set and tested successfully.
  • the synchronous trigger clock 100 sends pulse signals to all 12 lenses, and after receiving the pulse signal the 12 lenses are triggered to capture images and generate image signals.
  • in one embodiment, the omnidirectional visual obstacle avoidance system includes four ISPs.
  • the front left lens 11 and the front right lens 12 output image signals to ISP1;
  • the rear left lens 21 and the rear right lens 22 output image signals to ISP2;
  • the lower left lens 31, lower right lens 32, upper left lens 41 and upper right lens 42 output image signals to ISP3; and
  • the left-side left lens 51, left-side right lens 52, right-side left lens 61 and right-side right lens 62 output image signals to ISP4.
  • the combination is to sequentially combine image signals collected by multiple lenses into image data based on the image line number.
  • FIG. 4 is a schematic diagram of merging 2 channels of image signals into 1 channel according to an embodiment of the present invention.
  • the first line of the first image is placed on the first line of the target image, the first line of the second image on the second line of the target image, the second line of the first image on the third line of the target image, the second line of the second image on the fourth line of the target image, the third line of the first image on the fifth line of the target image, the third line of the second image on the sixth line of the target image, and so on, thereby forming a new target image.
  • image collection itself proceeds line by line from top to bottom.
  • a line captured by the lens can therefore be sent immediately to the ISP for merging.
  • the lines are interleaved as they arrive and passed immediately to back-end processing, so there is no need to wait until an entire image has been captured before stitching.
  • this reduces the delay time of data processing and also reduces the buffer space used.
  • the ISP further includes image processing.
  • the image processing includes automatic exposure.
  • the multiple lenses have the same automatic exposure parameter settings and automatically perform exposure adjustment based on the ISP-processed image. Since the left and right lenses on the same side face the same direction, their image brightness is required to be consistent, so their exposure parameters are the same. The exposure statistics can be based on the statistics of a single left lens or right lens, or on the combined statistics of the two lenses.
  • if they are based on the left lens, then when the left lens image changes, the right lens automatically follows the left lens in adjusting exposure; if they are based on the right lens, then when the right lens image changes, the left lens automatically follows the right lens in adjusting exposure; if they are based on combined exposure, then when either single lens changes, or when both lenses change, both lenses are adjusted at the same time.
  • the lower left lens 31, the lower right lens 32, the upper left lens 41 and the upper right lens 42 simultaneously capture a frame of image data each and then output it to ISP3 for merging; the left-side left lens 51, left-side right lens 52, right-side left lens 61 and right-side right lens 62 likewise output their frames to ISP4 for merging. The four channels of images are merged into one channel in either of two ways.
  • the first merging method is to first merge two channels into one image, and then merge the two merged images again.
  • FIG. 5 is a schematic diagram of merging two channels of image signals into one channel and then merging again according to an embodiment of the present invention. After two rounds of two-into-one merging, the ISP outputs the merged image data to the main chip.
  • the second combination method is to directly merge four channels into one image. Please refer to FIG. 6, which is a schematic diagram of four channels of image signals directly combined into one channel according to an embodiment of the present invention.
  • there are two ways to disassemble the merged image data: the first method is to copy the merged image data out in sequence according to the image line numbers to obtain the disassembled image data; the second method is to disassemble the merged image data according to the start address offsets of the merged images and the image width stride to obtain the disassembled image data.
  • FIG. 7 is a schematic diagram of a method for disassembling image data provided by an embodiment of the present invention.
  • the main chip obtains the merged image data, which needs to be disassembled into single-channel images and sent for visual processing.
  • the first method is to disassemble and copy the merged image line by line.
  • FIG. 7 shows the disassembly and restoration of a four-into-one merged image into its constituent images.
  • the first line of the merged image is disassembled to the first line of the first target image, the second line to the first line of the second target image, the third line to the first line of the third target image, the fourth line to the first line of the fourth target image, the fifth line to the second line of the first target image, and so on, until all images are restored.
  • FIG. 8 is a schematic diagram of a second method for disassembling image data according to an embodiment of the present invention.
  • the disassembly and restoration is performed according to the image start address offset and span.
  • in memory, the end address of the first row of the image data adjoins the start address of the second row, and the end address of the second row adjoins the start address of the third row.
  • set the start address of the first column of images to p1, the width to width and the stride to stride = width*4; by treating the other three columns as blank through this stride expansion, the first column by itself is a complete image.
  • setting the start address of the second column to p2 with the same width and stride, the second column is likewise a complete image.
  • the third and fourth columns are processed in the same way. Compared with the first method, no data copying is required.
  • the image data are restored and disassembled purely through the start address offsets and the enlarged stride.
  • the disassembly method for merging two channels into one image is similar.
  • the present invention also provides an omnidirectional visual obstacle avoidance realization device.
  • FIG. 9 is a schematic diagram of the internal structure of an omnidirectional visual obstacle avoidance device provided by an embodiment of the present invention.
  • the multi-lens omnidirectional visual obstacle avoidance device of the drone includes at least a memory 91, a processor 92, a communication bus 93, and a network interface 94.
  • the memory 91 includes at least one type of readable storage medium, and the readable storage medium includes flash memory, hard disk, multimedia card, card-type memory (for example, SD or DX memory, etc.), magnetic memory, magnetic disk, optical disk, and the like.
  • the memory 91 may be an internal storage unit of the omnidirectional visual obstacle avoidance implementation device, for example, the hard disk of the omnidirectional visual obstacle avoidance implementation device.
  • the memory 91 may also be an external storage device of the omnidirectional visual obstacle avoidance realization device, for example a plug-in hard disk, a smart media card (SMC), a secure digital (SD) card or a flash card equipped on the omnidirectional visual obstacle avoidance realization device.
  • the memory 91 may also include both an internal storage unit of the omnidirectional visual obstacle avoidance realization device and an external storage device.
  • the memory 91 can be used not only to store application software and various data installed in the omnidirectional visual obstacle avoidance realization device, such as the code of the omnidirectional visual obstacle avoidance program, etc., but also to temporarily store data that has been output or will be output.
  • the processor 92 may be a central processing unit (CPU), an image signal processor (Image Signal Processing, ISP), a controller, a microcontroller, a microprocessor, or another data processing chip, and is used to run the program code stored in the memory 91 or to process data, for example to execute the omnidirectional visual obstacle avoidance program.
  • the communication bus 93 is used to realize the connection and communication between these components.
  • the network interface 94 may optionally include a standard wired interface and a wireless interface (such as a WI-FI interface), and is generally used to establish a communication connection between the omnidirectional visual obstacle avoidance implementation device and other electronic devices.
  • the device for implementing omnidirectional visual obstacle avoidance may further include a user interface.
  • the user interface may include a display and an input unit such as a keyboard.
  • the optional user interface may also include a standard wired interface and a wireless interface.
  • the display may be an LED display, a liquid crystal display, a touch-sensitive liquid crystal display, an OLED (Organic Light-Emitting Diode, organic light-emitting diode) touch device, etc.
  • the display may also be appropriately called a display screen or a display unit, which is used to display the information processed in the omnidirectional visual obstacle avoidance realization device and to display a visualized user interface.
  • Figure 9 only shows an omnidirectional visual obstacle avoidance implementation device with components 91-94 and an omnidirectional visual obstacle avoidance program. Those skilled in the art will understand that the structure shown in Figure 9 does not constitute a limitation of the omnidirectional visual obstacle avoidance implementation device, which may include fewer or more components than shown in the figure, a combination of certain components, or a different component arrangement.
  • the memory 91 stores an omnidirectional visual obstacle avoidance program; when the processor 92 executes the omnidirectional visual obstacle avoidance program stored in the memory 91, the following steps are implemented:
  • Step S10: Send a trigger signal to the image acquisition device, so that the acquisition device collects the image signal;
  • Step S20: Perform merging processing on the image signals to obtain merged image data;
  • Step S30: Perform disassembly processing on the merged image data to obtain disassembled image data;
  • Step S40: Perform visual processing on the disassembled image data to obtain a visual image.
  • the omnidirectional visual obstacle avoidance program can be divided into a synchronous trigger module 10, a transmission module 20, a first processing module 30, a second processing module 40, and a setting module 50, where exemplarily:
  • the synchronization trigger module 10 is used to perform the task of sending a synchronization trigger pulse signal
  • the transmission module 20 is used to perform transmission signal and data tasks
  • the first processing module 30 is used for the ISP to execute the first processing
  • the second processing module 40 is used for the main chip to execute the second processing
  • the setting module 50 is used to set the synchronization trigger interval time.
  • the embodiment of the present invention also provides a storage medium, which is a computer-readable storage medium. The storage medium stores an omnidirectional visual obstacle avoidance program, and the omnidirectional visual obstacle avoidance program can be executed by one or more processors to achieve the following operations:
  • Step S10: Send a trigger signal to the image acquisition device, so that the acquisition device collects the image signal;
  • Step S20: Perform merging processing on the image signals to obtain merged image data;
  • Step S30: Perform disassembly processing on the merged image data to obtain disassembled image data;
  • Step S40: Perform visual processing on the disassembled image data to obtain a visual image.
  • the specific implementation of the storage medium of the present invention is basically the same as the foregoing embodiments of the omnidirectional visual obstacle avoidance method and device, and will not be repeated here.
  • sequence numbers of the above-mentioned embodiments of the present invention are only for description, and do not represent the advantages and disadvantages of the embodiments.
  • the terms "include", "comprise" and any other variants thereof in this document are intended to cover non-exclusive inclusion, so that a process, device, article or method including a series of elements includes not only those elements but also other elements not explicitly listed, or also includes elements inherent to the process, device, article or method.
  • if there are no further restrictions, an element defined by the phrase "including one ..." does not exclude the existence of other identical elements in the process, device, article or method that includes the element.
  • the technical solution of the present invention, in essence or in the part that contributes to the existing technology, can be embodied in the form of a software product. The computer software product is stored in a storage medium as described above (such as a ROM/RAM, a magnetic disk or an optical disk) and includes several instructions to make a terminal device (which can be a drone, a mobile phone, a computer, a server, or a network device, etc.) execute the methods described in the embodiments of the present invention.

Landscapes

  • Engineering & Computer Science (AREA)
  • Multimedia (AREA)
  • Signal Processing (AREA)
  • Image Processing (AREA)

Abstract

The present invention relates to the field of unmanned aerial vehicles, and provides a method, system, apparatus and storage medium for implementing omnidirectional visual obstacle avoidance. The method for implementing omnidirectional visual obstacle avoidance comprises: step S10, sending a trigger signal to an image acquisition device, so that the acquisition device acquires image signals; step S20, merging the image signals to obtain merged image data; step S30, disassembling the merged image data to obtain disassembled image data; and step S40, performing visual processing on the disassembled image data to obtain a visual image. The technical solution provided by the present invention solves the problem of connecting multiple lenses for omnidirectional visual obstacle avoidance on existing UAVs, while improving image processing efficiency and performance.

Description

Method, system and apparatus for implementing omnidirectional visual obstacle avoidance, and storage medium
Technical Field
The embodiments of the present invention relate to the field of unmanned aerial vehicles, and in particular to a method, system, apparatus and storage medium for implementing omnidirectional visual obstacle avoidance.
Background Art
With the development of UAV technology, UAV obstacle avoidance is now required to support omnidirectional obstacle avoidance in six directions: front, below, rear, left, right and above. The coordinates of the same object differ slightly between the images of two lenses, so the distance to an obstacle can be obtained after conversion, and a depth image of the obstacle can also be obtained by the binocular vision method. Therefore, implementing omnidirectional visual obstacle avoidance requires at least 6 pairs, i.e. 12 lenses, plus the main lens, 13 lenses in total, whereas the main chips currently on the market support at most 8 lens inputs, which falls far short of the requirements of omnidirectional obstacle avoidance. At the same time, for the multiple channels of acquired image signals, the existing image signal processors (Image Signal Processing, ISP) and the image processing of the main chip become a bottleneck: when a large amount of image information needs to be processed synchronously, a single chip cannot deliver the performance required. Moreover, UAV obstacle avoidance demands high real-time performance and fast processing, which the existing technology cannot satisfy. The image signals collected by the multiple lenses of existing UAVs therefore cannot be processed promptly, and processing efficiency and performance are insufficient.
Summary of the Invention
The main purpose of the present invention is to provide a method, system, apparatus and storage medium for implementing omnidirectional visual obstacle avoidance, aiming to solve the problems of connecting multiple lenses and of image processing efficiency and performance in existing UAV omnidirectional visual obstacle avoidance.
In order to achieve the above objective, the present invention provides a method for implementing omnidirectional visual obstacle avoidance, the method comprising:
Step S10: sending a trigger signal to an image acquisition device, so that the acquisition device acquires image signals;
Step S20: merging the image signals to obtain merged image data;
Step S30: disassembling the merged image data to obtain disassembled image data;
Step S40: performing visual processing on the disassembled image data to obtain a visual image.
Further, the trigger signal is sent to the acquisition device through a synchronous trigger clock; still further, the trigger signal is a pulse signal.
Further, in step S20, the image signals are merged by an image signal processor (Image Signal Processing, ISP) to obtain the merged image data.
Further, the disassembling in step S30 comprises:
copying the merged image data out in sequence according to the image line numbers to obtain the disassembled image data; or
disassembling the merged image data according to the start address offsets of the merged images and the image width stride to obtain the disassembled image data.
In addition, the present invention further provides a system for implementing omnidirectional visual obstacle avoidance, comprising:
a synchronous trigger clock, configured to send the trigger signal to the acquisition device so as to trigger the image acquisition device to acquire image signals;
a plurality of ISPs and a main chip, configured to merge the image signals to obtain merged image data; and
the main chip, configured to disassemble the merged image data and to perform visual processing on the disassembled image data to obtain a visual image.
Further, the trigger signal is a pulse signal.
Further, the steps of the main chip performing the disassembling comprise:
copying the merged image data in sequence according to the image line numbers to obtain the disassembled image data; or
disassembling the merged image data according to the start address offsets of the merged images and the image width stride to obtain the disassembled image data.
To achieve the above objective, the present invention further provides an apparatus for implementing omnidirectional visual obstacle avoidance, the apparatus comprising a memory and a processor, wherein the memory stores an omnidirectional visual obstacle avoidance program executable on the processor, and the omnidirectional visual obstacle avoidance program, when executed by the processor, implements the steps of the above method for implementing omnidirectional visual obstacle avoidance.
In addition, to achieve the above objective, the present invention further provides a computer-readable storage medium, wherein the storage medium stores an omnidirectional visual obstacle avoidance program, and the omnidirectional visual obstacle avoidance program can be executed by one or more processors to implement the steps of the above omnidirectional visual obstacle avoidance method.
The method and apparatus for implementing omnidirectional visual obstacle avoidance and the computer-readable storage medium proposed by the present invention solve the problems of connecting multiple lenses and of insufficient image processing performance in prior-art UAV omnidirectional visual obstacle avoidance, and realize omnidirectional visual obstacle avoidance for a UAV.
Brief Description of the Drawings
FIG. 1 is a schematic flowchart of a method for implementing omnidirectional visual obstacle avoidance provided by an embodiment of the present invention;
FIG. 2 is a schematic diagram of a system for implementing omnidirectional visual obstacle avoidance according to an embodiment of the present invention;
FIG. 3 is a schematic diagram of a synchronous trigger clock sending the trigger signal according to an embodiment of the present invention;
FIG. 4 is a schematic diagram of merging two channels of image signals into one channel according to an embodiment of the present invention;
FIG. 5 is a schematic diagram of merging two channels of image signals into one channel and then merging again according to an embodiment of the present invention;
FIG. 6 is a schematic diagram of directly merging four channels of image signals into one channel according to an embodiment of the present invention;
FIG. 7 is a schematic diagram of a first method for disassembling image data according to an embodiment of the present invention;
FIG. 8 is a schematic diagram of a second method for disassembling image data according to an embodiment of the present invention;
FIG. 9 is a schematic diagram of the internal structure of an apparatus for implementing omnidirectional visual obstacle avoidance according to an embodiment of the present invention;
FIG. 10 is a schematic diagram of the modules of the omnidirectional visual obstacle avoidance program in an apparatus for implementing omnidirectional visual obstacle avoidance according to an embodiment of the present invention.
Detailed Description of the Embodiments
In order to make the objectives, technical solutions and advantages of the present invention clearer, the present invention is further described in detail below with reference to the accompanying drawings and embodiments. It should be understood that the specific embodiments described here are only used to explain the present invention and are not intended to limit it. Based on the embodiments of the present invention, all other embodiments obtained by a person of ordinary skill in the art without creative work fall within the protection scope of the present invention.
Referring to FIG. 1, which is a schematic flowchart of a method for implementing omnidirectional visual obstacle avoidance provided by an embodiment of the present invention, the method provided by the present invention is applied to an unmanned aerial vehicle and includes:
Step S10: sending a trigger signal to an image acquisition device, so that the acquisition device acquires image signals. Specifically, the trigger signal is sent to the acquisition device through a synchronous trigger clock; further, the trigger signal is a pulse signal. In one embodiment, the image acquisition device is a lens of the UAV, and the image acquisition device acquires image signals after receiving the trigger signal.
Step S20: merging the image signals to obtain merged image data. Specifically, the image signals are merged by an image signal processor (Image Signal Processing, ISP) to obtain the merged image data.
Step S30: disassembling the merged image data to obtain disassembled image data.
Step S40: performing visual processing on the disassembled image data to obtain a visual image.
Referring to FIG. 2, which is a schematic diagram of a system for implementing omnidirectional visual obstacle avoidance provided by an embodiment of the present invention, the system includes a synchronous trigger clock 100, a plurality of ISPs and a main chip 200. The synchronous trigger clock 100 is configured to send the trigger signal to the acquisition device so as to trigger the image acquisition device to acquire image signals; the ISPs are configured to merge the image signals to obtain merged image data; and the main chip 200 is configured to disassemble the merged image data and to perform visual processing on the disassembled image data to obtain a visual image.
In this embodiment, the image acquisition device refers to a plurality of lenses of the UAV in six orientations, the six orientations being the front, rear, upper, lower, left and right sides of the UAV, with 2 lenses in each orientation, namely a front left lens 11, a front right lens 12, a rear left lens 21, a rear right lens 22, a lower left lens 31, a lower right lens 32, an upper left lens 41, an upper right lens 42, a left-side left lens 51, a left-side right lens 52, a right-side left lens 61 and a right-side right lens 62.
Referring to FIG. 3, which is a schematic diagram of the synchronous trigger clock sending the trigger signal according to an embodiment of the present invention, the synchronous trigger clock periodically sends the pulse signal at a fixed interval; as shown in FIG. 3, a pulse signal is sent every t milliseconds. The value of t milliseconds is set according to the flight speed and processing speed of the UAV; in this embodiment, 10 milliseconds, 40 milliseconds and 100 milliseconds were each set and tested successfully. The synchronous trigger clock 100 sends the pulse signal to all 12 lenses, and after receiving the pulse signal the 12 lenses are triggered to capture images and generate image signals.
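The following is a minimal, illustrative sketch of such a synchronous trigger clock, written in Python; it is an assumption of this edit, not code from the patent. The SyncTriggerClock class, the lenses list and the trigger() method on the camera objects are hypothetical names; the patent only requires that one pulse be fanned out to all 12 lenses every t milliseconds.

```python
import threading
import time

class SyncTriggerClock:
    """Illustrative synchronous trigger clock: every `interval_ms` milliseconds
    one trigger pulse is sent to every registered lens, so that all cameras
    start capturing the same scene at the same instant."""

    def __init__(self, lenses, interval_ms=40):
        self.lenses = lenses            # hypothetical camera objects exposing trigger()
        self.interval_ms = interval_ms  # e.g. 10, 40 or 100 ms, as tested in the embodiment
        self._stop = threading.Event()

    def run(self):
        while not self._stop.is_set():
            for lens in self.lenses:    # one pulse fans out to all 12 lenses
                lens.trigger()
            time.sleep(self.interval_ms / 1000.0)

    def stop(self):
        self._stop.set()
```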
The image signals are merged by the ISPs. As shown in FIG. 2, in one embodiment the system for implementing omnidirectional visual obstacle avoidance includes four ISPs: the front left lens 11 and the front right lens 12 output image signals to ISP1; the rear left lens 21 and the rear right lens 22 output image signals to ISP2; the lower left lens 31, lower right lens 32, upper left lens 41 and upper right lens 42 output image signals to ISP3; and the left-side left lens 51, left-side right lens 52, right-side left lens 61 and right-side right lens 62 output image signals to ISP4.
The merging combines the image signals acquired by multiple lenses into image data in sequence based on the image line numbers. Referring to FIG. 4, which is a schematic diagram of merging two channels of image signals into one channel according to an embodiment of the present invention: the first line of the first image is placed on the first line of the target image, the first line of the second image on the second line of the target image, the second line of the first image on the third line of the target image, the second line of the second image on the fourth line of the target image, the third line of the first image on the fifth line of the target image, the third line of the second image on the sixth line of the target image, and so on, thereby assembling a new target image, as sketched below.
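As an illustration of this row-interleaving pattern, the following NumPy sketch (an assumption of this edit, not code from the patent) merges two equally sized frames; target rows 1, 3, 5, ... come from the first lens and rows 2, 4, 6, ... from the second.

```python
import numpy as np

def merge_two_streams(img_a: np.ndarray, img_b: np.ndarray) -> np.ndarray:
    """Interleave two equally sized frames row by row: row k of img_a becomes
    target row 2k, and row k of img_b becomes target row 2k + 1."""
    assert img_a.shape == img_b.shape, "both lenses must deliver frames of the same size"
    h, w = img_a.shape[:2]
    merged = np.empty((2 * h, w) + img_a.shape[2:], dtype=img_a.dtype)
    merged[0::2] = img_a   # 1st, 3rd, 5th ... target lines from the first image
    merged[1::2] = img_b   # 2nd, 4th, 6th ... target lines from the second image
    return merged
```

In the actual ISP this interleaving is done line by line as the rows arrive, as the next paragraph explains; the array version above only shows the resulting layout.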
Image acquisition itself proceeds line by line from top to bottom; a line captured by a lens can be sent immediately to the ISP for merging, and once interleaved the line is immediately passed on to back-end processing. In this way there is no need to wait until an entire image has been captured before stitching, which reduces the latency of data processing and also reduces the buffer space used.
The ISP further performs image processing, which includes automatic exposure. The automatic exposure parameter settings of the multiple lenses are identical, and exposure is adjusted automatically based on the images processed by the ISP. Since the left and right lenses on the same side face the same direction, their image brightness is required to be consistent, so their exposure parameters are the same. The exposure statistics may be based on the statistics of a single left lens or right lens, or on the statistics of the two lenses combined. If they are based on the left lens, then when the left lens image changes, the right lens automatically follows the left lens in adjusting exposure; if they are based on the right lens, then when the right lens image changes, the left lens automatically follows the right lens in adjusting exposure; if they are based on combined exposure, then when either single lens changes, or when both lenses change, both lenses are adjusted at the same time.
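The exposure-follow policy can be pictured with the toy sketch below, using hypothetical names and a deliberately naive proportional auto-exposure step, purely for illustration: whichever statistic is chosen as the reference, both lenses of a pair end up with the same exposure setting.

```python
def follow_exposure(left_mean: float, right_mean: float,
                    mode: str = "combined", target: float = 128.0):
    """Return identical exposure gains for the left and right lens of one pair.

    mode == "left":     the right lens follows the left lens
    mode == "right":    the left lens follows the right lens
    mode == "combined": combined statistics of both frames drive both lenses
    """
    if mode == "left":
        reference = left_mean
    elif mode == "right":
        reference = right_mean
    else:
        reference = 0.5 * (left_mean + right_mean)
    gain = target / max(reference, 1e-6)   # naive proportional step, illustration only
    return gain, gain                      # the same setting is applied to both lenses
```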
Referring again to FIG. 1, the lower left lens 31, the lower right lens 32, the upper left lens 41 and the upper right lens 42 simultaneously capture one frame of image data each and then output it to ISP3 for merging, and the left-side left lens 51, the left-side right lens 52, the right-side left lens 61 and the right-side right lens 62 simultaneously capture one frame of image data each and then output it to ISP4 for merging.
In the merging process, the four channels of images are merged into one channel of image in either of the following two ways:
The first merging method is to first merge two channels of images into one image, and then merge the two merged images again. Referring to FIG. 5, which is a schematic diagram of merging two channels of image signals into one channel and then merging again according to an embodiment of the present invention, after two rounds of two-into-one merging, the ISP outputs the image data of the merged image to the main chip.
The second merging method is to directly merge the four channels into one image; see FIG. 6, which is a schematic diagram of directly merging four channels of image signals into one channel according to an embodiment of the present invention, and the sketch below.
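One possible reading of the direct four-into-one merge of FIG. 6, again as a hedged NumPy sketch under the same assumptions as above (equally sized frames, interleaving based on line numbers):

```python
import numpy as np

def merge_four_streams(frames):
    """Directly interleave four equally sized frames into one stream:
    target row 4k + i holds row k of frame i (i = 0..3)."""
    assert len(frames) == 4
    h, w = frames[0].shape[:2]
    merged = np.empty((4 * h, w) + frames[0].shape[2:], dtype=frames[0].dtype)
    for i, frame in enumerate(frames):
        merged[i::4] = frame
    return merged
```

The two-stage variant of FIG. 5 can be obtained by applying a two-way merge twice; the exact row order of the final image then depends on how the second merge is wired, which the patent leaves to the figure.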
There are two methods for disassembling the merged image data. In the first method, the merged image data are copied out in sequence according to the image line numbers to obtain the disassembled image data; in the second method, the merged image data are disassembled according to the start address offsets of the merged images and the image width stride to obtain the disassembled image data.
Referring to FIG. 7, which is a schematic diagram of the first method for disassembling image data according to an embodiment of the present invention: the main chip obtains the merged image data and needs to disassemble them into individual single-channel images before sending them for visual processing. The first method is to disassemble and copy the merged image line by line. FIG. 7 shows the disassembly and restoration of a four-into-one merged image: the first line of the merged image is disassembled to the first line of the first target image, the second line to the first line of the second target image, the third line to the first line of the third target image, the fourth line to the first line of the fourth target image, the fifth line to the second line of the first target image, the sixth line to the second line of the second target image, and so on, until the disassembly and restoration of the images is complete.
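The first disassembly method corresponds to an explicit copy of every n-th row, for example (illustrative sketch, assuming the four-into-one layout produced above):

```python
import numpy as np

def split_by_copy(merged: np.ndarray, n: int = 4):
    """Disassembly method one: sub-image i receives copies of merged rows
    i, i + n, i + 2n, ...; one explicit memory copy per lens."""
    return [merged[i::n].copy() for i in range(n)]
```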
Referring to FIG. 8, which is a schematic diagram of the second method for disassembling image data according to an embodiment of the present invention, the disassembly and restoration is performed according to the image start address offsets and the stride. In memory, the end address of the first row of the image data adjoins the start address of the second row, and the end address of the second row adjoins the start address of the third row. Let the start address of the first column of images be p1, the width be width and the stride be stride, with stride = width*4; by treating the other three columns of images as blank through this stride expansion, the first column by itself is a complete image. Let the start address of the second column be p2, with the same width and stride = width*4; treating the other three columns as blank images in the same way, the second column is likewise a complete image. The third and fourth columns of images are handled in the same way. Compared with the first method, no data copying is required at all: the image data are restored and disassembled simply through the start address offsets and the enlarged stride. The disassembly method for two channels merged into one image is similar.
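The second disassembly method can be mirrored with strided views: each sub-image is described only by a start address offset and a row stride of n merged rows, so no pixel data are copied. The sketch below is an assumption of this edit (single-channel 2-D frames) and uses NumPy's as_strided; plain slicing such as merged[i::n] gives the same zero-copy view.

```python
import numpy as np
from numpy.lib.stride_tricks import as_strided

def split_by_stride(merged: np.ndarray, n: int = 4):
    """Disassembly method two: no data copy. Sub-image i starts i merged rows
    into the buffer and advances n merged rows between its own consecutive rows,
    i.e. the 'start address offset + stride = width * n' scheme of FIG. 8."""
    h_total, w = merged.shape
    h = h_total // n
    row_bytes = merged.strides[0]                  # bytes from one merged row to the next
    return [as_strided(merged[i:],                 # start address offset: i rows
                       shape=(h, w),
                       strides=(n * row_bytes, merged.strides[1]))
            for i in range(n)]
```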
In addition, the present invention further provides an apparatus for implementing omnidirectional visual obstacle avoidance.
Referring to FIG. 9, which is a schematic diagram of the internal structure of an apparatus for implementing omnidirectional visual obstacle avoidance provided by an embodiment of the present invention, the multi-lens omnidirectional visual obstacle avoidance apparatus of the UAV includes at least a memory 91, a processor 92, a communication bus 93 and a network interface 94.
The memory 91 includes at least one type of readable storage medium, including a flash memory, a hard disk, a multimedia card, a card-type memory (for example, SD or DX memory), a magnetic memory, a magnetic disk, an optical disc and the like. In some embodiments, the memory 91 may be an internal storage unit of the apparatus for implementing omnidirectional visual obstacle avoidance, for example a hard disk of the apparatus. In other embodiments, the memory 91 may also be an external storage device of the apparatus, for example a plug-in hard disk, a smart media card (Smart Media Card, SMC), a secure digital (Secure Digital, SD) card or a flash card (Flash Card) provided on the apparatus. Further, the memory 91 may include both an internal storage unit and an external storage device of the apparatus. The memory 91 can be used not only to store the application software installed in the apparatus and various kinds of data, such as the code of the omnidirectional visual obstacle avoidance program, but also to temporarily store data that has been output or is to be output.
In some embodiments, the processor 92 may be a central processing unit (Central Processing Unit, CPU), an image signal processor (Image Signal Processing, ISP), a controller, a microcontroller, a microprocessor or another data processing chip, and is used to run the program code stored in the memory 91 or to process data, for example to execute the omnidirectional visual obstacle avoidance program.
The communication bus 93 is used to realize connection and communication between these components.
The network interface 94 may optionally include a standard wired interface and a wireless interface (such as a WI-FI interface), and is generally used to establish a communication connection between the apparatus and other electronic devices.
Optionally, the apparatus may further include a user interface, which may include a display (Display) and an input unit such as a keyboard (Keyboard); the optional user interface may also include a standard wired interface and a wireless interface. Optionally, in some embodiments, the display may be an LED display, a liquid crystal display, a touch-sensitive liquid crystal display, an OLED (Organic Light-Emitting Diode) touch device or the like. The display may also be appropriately referred to as a display screen or a display unit, and is used to display the information processed in the apparatus and to display a visualized user interface.
FIG. 9 shows only an apparatus with the components 91 to 94 and the omnidirectional visual obstacle avoidance program. A person skilled in the art will understand that the structure shown in FIG. 9 does not constitute a limitation on the apparatus, which may include fewer or more components than shown, or combine certain components, or have a different arrangement of components.
In the embodiment of the apparatus shown in FIG. 9, the memory 91 stores the omnidirectional visual obstacle avoidance program, and when executing the program stored in the memory 91, the processor 92 implements the following steps:
Step S10: sending a trigger signal to an image acquisition device, so that the acquisition device acquires image signals;
Step S20: merging the image signals to obtain merged image data;
Step S30: disassembling the merged image data to obtain disassembled image data;
Step S40: performing visual processing on the disassembled image data to obtain a visual image.
Referring to FIG. 10, which is a schematic diagram of the program modules of the omnidirectional visual obstacle avoidance program in an embodiment of the apparatus of the present invention, in this embodiment the omnidirectional visual obstacle avoidance program can be divided into a synchronous trigger module 10, a transmission module 20, a first processing module 30, a second processing module 40 and a setting module 50, where exemplarily:
the synchronous trigger module 10 is used to send the synchronous trigger pulse signal;
the transmission module 20 is used to transmit signals and data;
the first processing module 30 is used for the ISP to perform the first processing;
the second processing module 40 is used for the main chip to perform the second processing;
the setting module 50 is used to set the synchronous trigger interval.
The functions or operation steps implemented when the above program modules, namely the synchronous trigger module 10, the transmission module 20, the first processing module 30, the second processing module 40 and the setting module 50, are executed are substantially the same as those of the above embodiments and are not repeated here.
In addition, an embodiment of the present invention further provides a storage medium, which is a computer-readable storage medium storing an omnidirectional visual obstacle avoidance program, and the program can be executed by one or more processors to implement the following operations:
Step S10: sending a trigger signal to an image acquisition device, so that the acquisition device acquires image signals;
Step S20: merging the image signals to obtain merged image data;
Step S30: disassembling the merged image data to obtain disassembled image data;
Step S40: performing visual processing on the disassembled image data to obtain a visual image.
The specific implementation of the storage medium of the present invention is substantially the same as the above embodiments of the omnidirectional visual obstacle avoidance method and apparatus and will not be described again here.
It should be noted that the serial numbers of the above embodiments of the present invention are merely for description and do not indicate that one embodiment is better or worse than another. Moreover, the terms "comprise", "include" or any other variants thereof herein are intended to cover non-exclusive inclusion, so that a process, apparatus, article or method that includes a series of elements includes not only those elements but also other elements not expressly listed, or also includes elements inherent to such a process, apparatus, article or method. Unless further limited, an element defined by the phrase "including a ..." does not exclude the existence of other identical elements in the process, apparatus, article or method that includes that element.
From the description of the above embodiments, a person skilled in the art can clearly understand that the methods of the above embodiments can be implemented by means of software plus the necessary general-purpose hardware platform, and of course also by hardware, but in many cases the former is the better implementation. Based on this understanding, the technical solution of the present invention, in essence or in the part contributing to the prior art, can be embodied in the form of a software product. The computer software product is stored in a storage medium as described above (such as a ROM/RAM, a magnetic disk or an optical disc) and includes several instructions for causing a terminal device (which may be a UAV, a mobile phone, a computer, a server, a network device, or the like) to execute the methods described in the various embodiments of the present invention.
Note that the above are only preferred embodiments of the present invention and the technical principles applied. Those skilled in the art will understand that the present invention is not limited to the specific embodiments described here, and that various obvious changes, readjustments and substitutions can be made without departing from the protection scope of the present invention. Therefore, although the present invention has been described in some detail through the above embodiments, it is not limited to them and may include more other equivalent embodiments without departing from the concept of the present invention, and the scope of the present invention is determined by the scope of the appended claims.

Claims (10)

  1. A method for implementing omnidirectional visual obstacle avoidance, characterized in that the method comprises:
    Step S10: sending a trigger signal to an image acquisition device, so that the acquisition device acquires image signals;
    Step S20: merging the image signals to obtain merged image data;
    Step S30: disassembling the merged image data to obtain disassembled image data;
    Step S40: performing visual processing on the disassembled image data to obtain a visual image.
  2. The method for implementing omnidirectional visual obstacle avoidance according to claim 1, characterized in that the trigger signal is sent to the acquisition device through a synchronous trigger clock.
  3. The method for implementing omnidirectional visual obstacle avoidance according to claim 2, characterized in that the trigger signal is a pulse signal.
  4. The method for implementing omnidirectional visual obstacle avoidance according to claim 1, characterized in that in step S20, the image signals are merged by an image signal processor (Image Signal Processing, ISP) to obtain the merged image data.
  5. The method for implementing omnidirectional visual obstacle avoidance according to claim 1, characterized in that the disassembling in step S30 comprises:
    copying the merged image data in sequence according to the image line numbers to obtain the disassembled image data; or
    disassembling the merged image data according to the start address offsets of the merged images and the image width stride to obtain the disassembled image data.
  6. A system for implementing omnidirectional visual obstacle avoidance, characterized in that the system comprises:
    a synchronous trigger clock, configured to send the trigger signal to the acquisition device so as to trigger the image acquisition device to acquire image signals;
    a plurality of ISPs and a main chip, configured to merge the image signals to obtain merged image data; and
    the main chip, configured to disassemble the merged image data and to perform visual processing on the disassembled image data to obtain a visual image.
  7. The system for implementing omnidirectional visual obstacle avoidance according to claim 6, characterized in that the trigger signal is a pulse signal.
  8. The system for implementing omnidirectional visual obstacle avoidance according to claim 6, characterized in that the steps of the main chip performing the disassembling comprise:
    copying the merged image data in sequence according to the image line numbers to obtain the disassembled image data; or
    disassembling the merged image data according to the start address offsets of the merged images and the image width stride to obtain the disassembled image data.
  9. An apparatus for implementing omnidirectional visual obstacle avoidance, characterized in that the apparatus comprises a memory and a processor, wherein the memory stores an omnidirectional visual obstacle avoidance program executable on the processor, and the omnidirectional visual obstacle avoidance program, when executed by the processor, implements the steps of the method for implementing omnidirectional visual obstacle avoidance according to any one of claims 1 to 5.
  10. A storage medium, characterized in that the storage medium is a computer-readable storage medium storing an omnidirectional visual obstacle avoidance program, and the omnidirectional visual obstacle avoidance program can be executed by one or more processors to implement the steps of the omnidirectional visual obstacle avoidance method according to any one of claims 1 to 5.
PCT/CN2020/123317 2019-10-25 2020-10-23 Method, system and apparatus for implementing omnidirectional visual obstacle avoidance, and storage medium WO2021078268A1 (zh)

Priority Applications (1)

Application Number Priority Date Filing Date Title
US17/660,504 US20220256097A1 (en) 2019-10-25 2022-04-25 Method, system and apparatus for implementing omnidirectional vision obstacle avoidance and storage medium

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
CN201911024682.9A 2019-10-25 2019-10-25 Method, system and apparatus for implementing omnidirectional visual obstacle avoidance, and storage medium
CN201911024682.9 2019-10-25

Related Child Applications (1)

Application Number Title Priority Date Filing Date
US17/660,504 Continuation US20220256097A1 (en) 2019-10-25 2022-04-25 Method, system and apparatus for implementing omnidirectional vision obstacle avoidance and storage medium

Publications (1)

Publication Number Publication Date
WO2021078268A1 true WO2021078268A1 (zh) 2021-04-29

Family

ID=69849559

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2020/123317 WO2021078268A1 (zh) 2019-10-25 2020-10-23 Method, system and apparatus for implementing omnidirectional visual obstacle avoidance, and storage medium

Country Status (3)

Country Link
US (1) US20220256097A1 (zh)
CN (1) CN110933364A (zh)
WO (1) WO2021078268A1 (zh)

Families Citing this family (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN110933364A (zh) * 2019-10-25 2020-03-27 深圳市道通智能航空技术有限公司 Method, system and apparatus for implementing omnidirectional visual obstacle avoidance, and storage medium


Family Cites Families (16)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP5082209B2 (ja) * 2005-06-27 2012-11-28 株式会社日立製作所 Transmission device, reception device, and video signal transmission/reception system
US8400497B2 (en) * 2007-09-07 2013-03-19 Samsung Electronics Co., Ltd Method and apparatus for generating stereoscopic file
US8922621B2 (en) * 2007-10-19 2014-12-30 Samsung Electronics Co., Ltd. Method of recording three-dimensional image data
JP5815390B2 (ja) * 2011-12-08 2015-11-17 ルネサスエレクトロニクス株式会社 Semiconductor device and image processing method
JP6044328B2 (ja) * 2012-12-26 2016-12-14 株式会社リコー Image processing system, image processing method, and program
CN103237157B (zh) * 2013-05-13 2015-12-23 四川虹微技术有限公司 Real-time high-definition video image transposer
CN105338358B (zh) * 2014-07-25 2018-12-28 阿里巴巴集团控股有限公司 Method and apparatus for decoding images
CN104333762B (zh) * 2014-11-24 2017-10-10 成都瑞博慧窗信息技术有限公司 Video decoding method
US10277813B1 (en) * 2015-06-25 2019-04-30 Amazon Technologies, Inc. Remote immersive user experience from panoramic video
US9843725B2 (en) * 2015-12-29 2017-12-12 VideoStitch Inc. Omnidirectional camera with multiple processors and/or multiple sensors connected to each processor
US10627494B2 (en) * 2016-09-16 2020-04-21 Analog Devices, Inc. Interference handling in time-of-flight depth sensing
CN108810574B (zh) * 2017-04-27 2021-03-12 腾讯科技(深圳)有限公司 Video information processing method and terminal
US10152775B1 (en) * 2017-08-08 2018-12-11 Rockwell Collins, Inc. Low latency mixed reality head wearable device
WO2019151798A1 (ko) * 2018-01-31 2019-08-08 엘지전자 주식회사 Method and apparatus for transmitting and receiving metadata about an image in a wireless communication system
US20200153885A1 (en) * 2018-10-01 2020-05-14 Lg Electronics Inc. Apparatus for transmitting point cloud data, a method for transmitting point cloud data, an apparatus for receiving point cloud data and/or a method for receiving point cloud data
US10992860B2 (en) * 2019-03-29 2021-04-27 Nio Usa, Inc. Dynamic seam adjustment of image overlap zones from multi-camera source images

Patent Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN103957398A (zh) * 2014-04-14 2014-07-30 北京视博云科技有限公司 一种立体图像的采样、编码及解码方法及装置
CN107026959A (zh) * 2016-02-01 2017-08-08 杭州海康威视数字技术股份有限公司 一种图像采集方法及图像采集设备
CN108234933A (zh) * 2016-12-21 2018-06-29 上海杰图软件技术有限公司 基于多路图像信号处理的实时拼接全景影像的方法及系统
US20190052815A1 (en) * 2017-08-10 2019-02-14 Altek Semiconductor Corp. Dual-camera image pick-up apparatus and image capturing method thereof
CN110009595A (zh) * 2019-04-12 2019-07-12 深圳市道通智能航空技术有限公司 一种图像数据处理方法、装置、图像处理芯片及飞行器
CN110933364A (zh) * 2019-10-25 2020-03-27 深圳市道通智能航空技术有限公司 全向视觉避障实现方法、系统、装置及存储介质

Also Published As

Publication number Publication date
CN110933364A (zh) 2020-03-27
US20220256097A1 (en) 2022-08-11

Similar Documents

Publication Publication Date Title
JP7237102B2 (ja) 交通灯信号の制御方法、装置、機器及び記憶媒体
US8964025B2 (en) Visual obstruction removal with image capture
KR102463304B1 (ko) 비디오 처리 방법 및 장치, 전자기기, 컴퓨터 판독 가능한 저장 매체 및 컴퓨터 프로그램
CN109218656B (zh) 图像显示方法、装置及系统
EP3989116A1 (en) Method and apparatus for detecting target object, electronic device and storage medium
US10165201B2 (en) Image processing method and apparatus and terminal device to obtain a group photo including photographer
WO2022134364A1 (zh) 车辆的控制方法、装置、系统、设备及存储介质
US20200218555A1 (en) Network Error Detection Using Virtual Reality Display Devices
CN102789336B (zh) 多屏拼接触控方法和系统
US20150294614A1 (en) Display panel driving method, driving device and display device
CN106506987B (zh) Led显示控制方法、图像拼接边缘优化方法及处理装置
JP7382298B2 (ja) 駐車処理のための方法、システム、装置及び車両コントローラ
JP7038218B2 (ja) 代表イメージの生成
WO2021078268A1 (zh) 全向视觉避障实现方法、系统、装置及存储介质
CN114981776A (zh) 一种调度硬件加速器的方法及任务调度器
JP2021179964A (ja) 画像収集設備の監視方法、装置、電子設備、記憶媒体、及びプログラム
CN102982570A (zh) 信息处理装置、信息处理系统、信息处理方法与程序
CN111698530A (zh) 视频传输方法、装置、设备及计算机可读存储介质
WO2022063014A1 (zh) 多光源摄像头设备的光源控制方法及装置、介质、终端
US20190103071A1 (en) Frame drop processing method and system for played ppt
CN104049846A (zh) 一种信息处理方法及电子设备
CN102685174B (zh) 基于大型显示系统的人机互动信息处理方法
JP2014016957A (ja) データ処理装置
JP6063056B2 (ja) ファイルトリミング方法、端末、サーバー、プログラム及び記録媒体
JP6416925B2 (ja) カメラアセンブリ、装置、プログラム、方法、およびコンピュータ可読記憶媒体

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 20879110

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: DE

122 Ep: pct application non-entry in european phase

Ref document number: 20879110

Country of ref document: EP

Kind code of ref document: A1