CN205608814U - Augmented reality system based on zynq software and hardware concurrent processing - Google Patents

Augmented reality system based on zynq software and hardware concurrent processing

Info

Publication number
CN205608814U
CN205608814U CN201620319364.0U
Authority
CN
China
Prior art keywords
image
kernel module
controller
sdram
zynq
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Expired - Fee Related
Application number
CN201620319364.0U
Other languages
Chinese (zh)
Inventor
祝清瑞
汤心溢
李争
刘源
王晨
代具亭
张昊
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Shanghai Jiwu Photoelectric Technology Co ltd
Original Assignee
Shanghai Institute of Technical Physics of CAS
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Shanghai Institute of Technical Physics of CAS filed Critical Shanghai Institute of Technical Physics of CAS
Priority to CN201620319364.0U priority Critical patent/CN205608814U/en
Application granted granted Critical
Publication of CN205608814U publication Critical patent/CN205608814U/en

Landscapes

  • Processing Or Creating Images (AREA)

Abstract

The utility model discloses an augmented reality system based on Zynq software and hardware concurrent processing, comprising a Zynq main processor, a USB camera, a USB control chip, DDR3 SDRAM, an SD card, SDRAM and a VGA display, mainly used to implement the processing and display of an augmented reality system. The technical scheme is as follows: the Zynq main processor comprises a processor system and an FPGA; the processor system imports identification images and computes the intrinsic parameters of the USB camera; the FPGA pre-processes the images captured by the USB camera and sends the pre-processing results back to the processor system over the AXI bus; the processor system then computes the extrinsic parameters of the camera, fuses the virtual image with the real image in real time, and displays the result on the VGA display. This patent has the following beneficial effects: fast processing speed, good real-time performance, strong processing capability, improved user experience, reduced system power consumption and good versatility.

Description

An augmented reality system based on Zynq software-hardware co-processing
Technical field
This patent relates to the field of computer augmented reality, and in particular to an augmented reality system based on Zynq software-hardware co-processing.
Background technology
Augmented reality (AR) is a technique that "seamlessly" integrates real-world information with virtual-world information. Visual information, sound and other content that would otherwise be difficult to experience within a certain time and space range of the real world is superimposed, by means of virtual three-dimensional information such as computer-generated graphics, text and annotations, onto the real scene seen by the user, thereby extending the human ability to perceive and understand the world.
The architecture of a traditional embedded augmented reality processing system is as follows: a camera captures images of the real world, an ARM processor pre-processes the real images, then performs marker recognition, three-dimensional registration and virtual-real fusion, and the final rendered image is sent to a display in real time. Because the ARM processor executes the processing program serially, steps such as grayscale conversion and edge detection are slow and it is difficult to process the images in real time; real-time performance is poor, processing capability is limited, user experience suffers, the program is large, system power consumption is relatively high, and the system is suitable only for certain scenarios and lacks versatility.
Summary of the invention
The technical problem to be solved by this patent is to address the above defects of the prior art, namely slow processing speed, poor real-time performance, limited processing capability, degraded user experience, high system power consumption and lack of versatility, by providing an augmented reality system based on Zynq software-hardware co-processing with fast processing speed, good real-time performance, strong processing capability, improved user experience, reduced system power consumption and good versatility.
The technical scheme adopted by this patent to solve the technical problem is to construct an augmented reality system based on Zynq software-hardware co-processing. The Zynq main processor comprises a processor system and an FPGA connected by a high-speed AXI bus. The processor system comprises an ARM processor and a DDR3 controller, together with four AXI_HP interfaces, four AXI_GP interfaces and one AXI_ACP interface. The FPGA comprises an SDRAM controller IP core module, a VGA controller IP core module and an image pre-processing IP core module. The USB control chip is connected to the USB camera and to the ARM processor. The DDR3 SDRAM is connected to the ARM processor through the DDR3 controller, and the DDR3 controller is also connected to the high-speed AXI bus through a DMA transfer channel. The SD card is connected to the ARM processor. The SDRAM controller IP core module is connected to the SDRAM and is also connected to the high-speed AXI bus through a video direct memory transfer (video DMA) channel. The input and output of the image pre-processing IP core module are both connected to the high-speed AXI bus through video DMA channels. The VGA controller IP core module is connected to the VGA display and is also connected to the high-speed AXI bus through a video DMA channel.
This patent further relates to a method of performing augmented reality with the above augmented reality system based on Zynq software-hardware co-processing, comprising the following steps:
Step 1: store the files needed for Linux system startup in the SD card, set the boot mode of the Zynq main processor to SD card boot so that the Linux system starts automatically on power-up; write and run the drivers of the image pre-processing IP core module, the VGA controller IP core module and the SDRAM controller IP core module; according to the physical addresses of the corresponding IP core modules given by the Vivado software, write kernel drivers that operate on those physical addresses; and run an OpenCV-based Qt display and control program for interaction and display;
Step 2: capture the given chessboard images with the USB camera, calibrate the USB camera with the OpenCV camera calibration routine and compute its intrinsic parameters; select identification images in the Qt display and control program and import them into the DDR3 SDRAM; compute the Hamming code information of the identification images and store it in the SDRAM through the video DMA channel;
Step 3: use the OpenGL integrated with OpenCV to generate the three-dimensional virtual information corresponding to the identification images, and send it to the SDRAM for storage through the video DMA channel;
Step 4: the ARM processor acquires the original images from the USB camera in real time and transfers them to the FPGA for buffering through the video DMA channel;
Step 5: design the image pre-processing IP core module with the Vivado HLS software and pre-process the original image to obtain a pre-processed image; the pre-processing includes grayscale conversion, binarization by threshold segmentation, contour detection, polygon approximation of the detected contours, finding quadrilaterals resembling the identification image as candidate identification regions, and recording the corner locations of the candidate identification regions;
Step 6: send the pre-processed image back to the ARM processor through the video DMA channel; write an augmented reality processing program under Linux based on OpenCV with integrated OpenGL; recover the front view of the marker from the original image, identify the special identifier in the candidate identification regions of step 5, and perform pose estimation on the candidate identification regions in which a special identifier is identified to obtain the extrinsic parameters of the USB camera; the extrinsic parameters of the USB camera comprise a rotation matrix and a translation vector;
Step 7: for the candidate identification regions of step 5 in which a special identifier is identified, import the corresponding three-dimensional virtual information from the SDRAM through the video DMA channel, and fuse the corresponding virtual three-dimensional information with the original image according to the intrinsic parameters of the USB camera from step 2 and the extrinsic parameters from step 6, obtaining the image after virtual-real fusion;
Step 8: transfer the virtual-real fused image of step 7 to the VGA controller IP core module through the video DMA channel, and the VGA controller IP core module controls the VGA display to show it.
In the above method of performing augmented reality with the augmented reality system based on Zynq software-hardware co-processing described in this patent, step 5 specifically comprises:
5-1) write the image pre-processing IP core module program in the Vivado HLS software, and convert the image buffered in the FPGA into a Mat-type image;
5-2) convert the Mat-type image from a three-channel color image into a single-channel grayscale image;
5-3) binarize the single-channel grayscale image with a threshold method to obtain a binary image;
5-4) perform contour detection on the binary image to obtain an image containing polygonal contours;
5-5) apply polygon approximation to the polygonal contours with an approximate-polygon method, and discard contour regions that are not quadrilaterals;
5-6) compute the corner locations of the candidate identification regions and save them at the end of the original image data as candidate identification position data;
5-7) use the Vivado HLS software to apply pipeline optimization to the image pre-processing IP core module program, optimizing processing speed and resource usage; generate the RTL code and package it as the IP core module.
In the above method of performing augmented reality with the augmented reality system based on Zynq software-hardware co-processing described in this patent, step 6 specifically comprises:
6-1) send the pre-processed image back to the ARM processor through the video DMA channel, apply a perspective transform to each candidate identification region, and obtain the square view of the candidate identification region;
6-2) binarize the candidate identification region with the Otsu algorithm, removing gray pixels and keeping only black and white pixels;
6-3) compute the Hamming code information of the inner area of the square view of the candidate identification region and its Hamming distance to the Hamming code information of the identification images stored in the SDRAM; rotate the candidate identification region by 90 degrees clockwise or counterclockwise in turn and recompute the Hamming distance; if the current minimum Hamming distance is 0, the current candidate identification region is a correct identification region;
6-4) after the correct identification region is found, call an OpenCV function to locate the corner positions with sub-pixel precision;
6-5) from the intrinsic parameters of the USB camera and the corner positions of the candidate identification region, call an OpenCV function to compute the extrinsic parameters of the USB camera.
Implementing the augmented reality system and method based on Zynq software-hardware co-processing of this patent has the following beneficial effects. The system uses a Zynq main processor, a USB camera, a USB control chip, DDR3 SDRAM, an SD card, SDRAM and a VGA display; the Zynq main processor comprises a processor system and an FPGA connected by a high-speed AXI bus, and the processor system comprises an ARM processor and a DDR3 controller. The Zynq device integrates a 28 nm low-power FPGA and contains an on-chip high-speed AXI bus, which greatly increases processing speed and reduces hardware design complexity. Because software-hardware co-processing is used, the ARM processor and the FPGA share different processing tasks and work cooperatively, which improves system processing efficiency, reduces power consumption and makes the system more versatile. The SDRAM stores the Hamming code information of the identification images to be recognized and the corresponding three-dimensional virtual information, so recognition and virtual-real fusion of multiple markers are faster and the user experience is better. Therefore the processing speed is fast, real-time performance is good, processing capability is strong, user experience is improved, system power consumption is reduced and the system is versatile.
Brief description of the drawings
In order to explain the embodiments of this patent or the technical solutions of the prior art more clearly, the drawings required for describing the embodiments or the prior art are briefly introduced below. Obviously, the drawings described below show only some embodiments of this patent, and those of ordinary skill in the art can obtain other drawings from them without creative effort.
Fig. 1 is the software and hardware architecture block diagram of the system in one embodiment of the augmented reality system and method based on Zynq software-hardware co-processing of the present invention;
Fig. 2 is the flow chart of the method in said embodiment;
Fig. 3 is the detailed flow chart, in said embodiment, of running the Linux operating system on the ARM processor, implementing the drivers of each peripheral and hardware IP core module, and using Qt to realize the graphical interface for interaction and display;
Fig. 4 is the detailed flow chart, in said embodiment, of designing the image pre-processing IP core module with the Vivado HLS software and pre-processing the original image to obtain the pre-processed image;
Fig. 5 is the detailed flow chart of computing the extrinsic parameters of the USB camera in said embodiment.
Detailed description of the invention
The technical solutions in the embodiments of this patent are described clearly and completely below with reference to the accompanying drawings. Obviously, the described embodiments are only some, not all, of the embodiments of this patent. Based on the embodiments in this patent, all other embodiments obtained by those of ordinary skill in the art without creative effort fall within the protection scope of this patent.
In this embodiment of the augmented reality system and method based on Zynq software-hardware co-processing of this patent, the software and hardware architecture block diagram of the system is shown in Fig. 1. In Fig. 1, the augmented reality system based on Zynq software-hardware co-processing comprises a Zynq main processor, a USB camera, a USB control chip, DDR3 SDRAM, an SD card, SDRAM and a VGA display. In the present embodiment the Zynq main processor is a Xilinx Zynq-7030-FBG484; it comprises a processor system and an FPGA connected by a high-speed AXI bus. The processor system comprises an ARM processor and a DDR3 controller, together with four AXI_HP interfaces, four AXI_GP interfaces and one AXI_ACP interface. The AXI_HP interfaces provide high-bandwidth data paths in direct memory access mode; the AXI_GP interfaces connect to the high-speed AXI bus and carry control commands between the ARM processor and the FPGA; the AXI_ACP interface connects to the high-speed AXI bus and serves as a low-latency path for the FPGA to access the cache of the ARM processor. The FPGA comprises an SDRAM controller IP core module, a VGA controller IP core module and an image pre-processing IP core module.
In the present embodiment, the USB control chip is connected to the USB camera and to the ARM processor. The USB control chip used is the TUSB1210 from TI, a USB interface chip that supports OTG, fully supports the USB 2.0 protocol and supports all USB devices.
In the present embodiment, the DDR3 SDRAM is connected to the ARM processor through the DDR3 controller; the DDR3 controller controls the DDR3 SDRAM to store the images captured by the USB camera, and is also connected to the high-speed AXI bus through a DMA transfer channel, which speeds up memory reads and writes and raises the data transfer rate. It is worth mentioning that in the present embodiment the DDR3 SDRAM consists of two MT41K128M16JT-125-K devices with a 32-bit data bus and a total capacity of 512 MB, and serves as the memory of the ARM processor for running the Linux system.
In the present embodiment, the SD card is connected to the ARM processor and stores the Linux system boot files and the identification images to be recognized. The SD card used is a 16 GB Kingston card with a FAT32 file system; it stores the Linux boot files and the identification images to be recognized, and while the system is running it also saves the calibration data obtained by calibrating the USB camera.
In the present embodiment, the SDRAM controller IP core module is connected to the SDRAM and controls the SDRAM to store the Hamming code information of the identification images to be recognized and the corresponding three-dimensional virtual information. It is worth mentioning that in the present embodiment the SDRAM is a Micron MT48LC8M32B2TG, a 32-bit SDRAM with a capacity of 256 Mb, used as the buffer module for the identification images and the three-dimensional virtual information. In the present embodiment, the SDRAM controller IP core module is also connected to the high-speed AXI bus through a video DMA channel.
In the present embodiment, the input and output of the image pre-processing IP core module are both connected to the high-speed AXI bus through video DMA channels. The image pre-processing IP core module performs, in sequence, grayscale conversion, threshold binarization, contour detection and polygon approximation of the detected contours on the images captured by the USB camera. It is worth noting that in the present embodiment the high-level synthesis tool Vivado HLS is used, so the image pre-processing IP core module can be implemented without writing RTL code, which shortens the development cycle, makes the design easier to maintain and port, and provides good flexibility.
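As an illustration of the kind of streaming IP core that Vivado HLS can synthesize without hand-written RTL, the following is a minimal sketch of a pre-processing top function built from the hls_video library. Only the grayscale-conversion and thresholding stages are shown; the resolution, threshold value and interface pragmas are assumptions, and the contour-detection and polygon-approximation stages of the actual IP core module are omitted.

```cpp
#include "hls_video.h"
#include <hls_stream.h>
#include <ap_axi_sdata.h>

typedef hls::stream<ap_axiu<24, 1, 1, 1> > AXI_STREAM;   // 24-bit AXI4-Stream video
typedef hls::Mat<720, 1280, HLS_8UC3>      RGB_IMAGE;
typedef hls::Mat<720, 1280, HLS_8UC1>      GRAY_IMAGE;

// Streaming pre-processing stage: AXI video in -> grayscale -> binary -> AXI video out.
void img_preprocess(AXI_STREAM& video_in, AXI_STREAM& video_out, int rows, int cols) {
#pragma HLS INTERFACE axis      port=video_in
#pragma HLS INTERFACE axis      port=video_out
#pragma HLS INTERFACE s_axilite port=rows
#pragma HLS INTERFACE s_axilite port=cols
#pragma HLS INTERFACE s_axilite port=return
#pragma HLS DATAFLOW
    RGB_IMAGE  rgb(rows, cols);
    GRAY_IMAGE gray(rows, cols);
    GRAY_IMAGE bin(rows, cols);
    RGB_IMAGE  out(rows, cols);

    hls::AXIvideo2Mat(video_in, rgb);                        // unpack the AXI4-Stream into an hls::Mat
    hls::CvtColor<HLS_RGB2GRAY>(rgb, gray);                  // grayscale conversion
    hls::Threshold(gray, bin, 128, 255, HLS_THRESH_BINARY);  // binarization (threshold value assumed)
    hls::CvtColor<HLS_GRAY2RGB>(bin, out);                   // widen back to the 24-bit stream format
    hls::Mat2AXIvideo(out, video_out);                       // repack for the video DMA channel
}
```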
In the present embodiment, the VGA controller IP core module is connected to the VGA display; it controls the VGA display to show the image after virtual-real fusion, and is also connected to the high-speed AXI bus through a video DMA channel. In the present embodiment, the highest video format supported by the VGA display is 720p@60Hz.
In the present embodiment, the Zynq device integrates a 28 nm low-power FPGA and contains an on-chip high-speed AXI bus, which greatly increases processing speed and reduces hardware design complexity. Because software-hardware co-processing is used, the ARM processor and the FPGA share different processing tasks and work cooperatively, which improves system processing efficiency, reduces power consumption and makes the system more versatile. The SDRAM stores the Hamming code information of the identification images to be recognized and the corresponding three-dimensional virtual information, so recognition and virtual-real fusion of multiple markers are faster and the user experience is better. Therefore the processing speed is fast, real-time performance is good, processing capability is strong, user experience is improved, system power consumption is reduced and the system is versatile.
In the present embodiment, the kernel-space software includes the bootloader, the Linux kernel and its drivers, the board support package, and the drivers of the image pre-processing IP, the SDRAM controller and the VGA controller. The user-space software includes the OpenCV-based augmented reality application and the Qt display and control interface.
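As a sketch of the boundary between the OpenCV-based application and the Qt display and control interface, the snippet below shows one common way to hand an OpenCV frame to a Qt widget; the file name and widget layout are illustrative assumptions only, not the embodiment's actual interface.

```cpp
#include <opencv2/opencv.hpp>
#include <QApplication>
#include <QLabel>
#include <QImage>
#include <QPixmap>

int main(int argc, char** argv) {
    QApplication app(argc, argv);

    cv::Mat frame = cv::imread("fused_frame.png");   // hypothetical virtual-real fused frame
    cv::cvtColor(frame, frame, cv::COLOR_BGR2RGB);   // OpenCV stores BGR, Qt expects RGB

    QImage img(frame.data, frame.cols, frame.rows,
               static_cast<int>(frame.step), QImage::Format_RGB888);

    QLabel view;
    view.setPixmap(QPixmap::fromImage(img.copy()));  // copy() detaches the pixels from the Mat buffer
    view.show();
    return app.exec();
}
```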
The present embodiment further relates to a method of performing augmented reality with the above augmented reality system based on Zynq software-hardware co-processing; the flow chart of the method is shown in Fig. 2. In Fig. 2, the method comprises the following steps:
Step 1: run the Linux operating system on the ARM processor, implement the drivers of each peripheral and hardware IP core module, and use Qt to realize the graphical interface for interaction and display. The detailed flow of this step is shown in Fig. 3: store the files needed for Linux system startup in the SD card; set the boot mode of the Zynq main processor to SD card boot so that the Linux system starts automatically on power-up; write and run the drivers of the image pre-processing IP core module, the VGA controller IP core module and the SDRAM controller IP core module; according to the physical addresses of the corresponding IP core modules given by the Vivado software, write kernel drivers that operate on those physical addresses; and run the OpenCV-based Qt display and control program, which is used for interaction and display.
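The embodiment's kernel drivers map the IP core registers at the physical addresses reported by Vivado. As a simpler user-space illustration of the same idea (not the kernel driver itself), the sketch below maps an assumed AXI-Lite register page through /dev/mem on Zynq; the base address and register layout are assumptions taken from a typical Vivado address map.

```cpp
#include <cstdint>
#include <cstdio>
#include <fcntl.h>
#include <sys/mman.h>
#include <unistd.h>

int main() {
    const off_t  ip_base = 0x43C00000;   // assumed base address from the Vivado address editor
    const size_t map_len = 0x1000;       // one 4 KB register page

    int fd = open("/dev/mem", O_RDWR | O_SYNC);
    if (fd < 0) { perror("open /dev/mem"); return 1; }

    void* base = mmap(nullptr, map_len, PROT_READ | PROT_WRITE, MAP_SHARED, fd, ip_base);
    if (base == MAP_FAILED) { perror("mmap"); close(fd); return 1; }

    volatile uint32_t* regs = static_cast<volatile uint32_t*>(base);
    regs[0] = 0x1;                                        // hypothetical "start" bit of the IP core
    std::printf("status register: 0x%08x\n", regs[1]);    // hypothetical status register

    munmap(base, map_len);
    close(fd);
    return 0;
}
```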
Step 2: capture chessboard images with the USB camera, calibrate the USB camera on the ARM processor and compute its intrinsic parameters, import one or more identification images into the DDR3 SDRAM, compute the Hamming code information of the identification images and store it in the SDRAM. In this step, the given chessboard images are captured with the USB camera and the camera is calibrated with the OpenCV camera calibration routine to obtain its intrinsic parameters; identification images are selected in the Qt display and control program and imported into the DDR3 SDRAM; the Hamming code information of the identification images is computed and stored in the SDRAM through the video DMA channel.
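A minimal sketch of the calibration in this step using the standard OpenCV chessboard routines; the board dimensions, square size and number of views are assumptions, and in practice each view should be taken from a different camera pose rather than back-to-back frames.

```cpp
#include <opencv2/opencv.hpp>
#include <iostream>
#include <vector>

int main() {
    const cv::Size board(9, 6);     // inner corners per row/column (assumed)
    const float   square = 25.0f;   // square size in mm (assumed)

    // Chessboard corner coordinates in the board's own plane (z = 0).
    std::vector<cv::Point3f> model;
    for (int y = 0; y < board.height; ++y)
        for (int x = 0; x < board.width; ++x)
            model.push_back(cv::Point3f(x * square, y * square, 0.0f));

    std::vector<std::vector<cv::Point3f>> objectPoints;
    std::vector<std::vector<cv::Point2f>> imagePoints;

    cv::VideoCapture cap(0);        // the USB camera
    cv::Size imageSize;
    for (int view = 0; view < 15; ++view) {
        cv::Mat frame, gray;
        cap >> frame;
        if (frame.empty()) break;
        imageSize = frame.size();
        cv::cvtColor(frame, gray, cv::COLOR_BGR2GRAY);

        std::vector<cv::Point2f> corners;
        if (cv::findChessboardCorners(gray, board, corners)) {
            cv::cornerSubPix(gray, corners, cv::Size(11, 11), cv::Size(-1, -1),
                             cv::TermCriteria(cv::TermCriteria::EPS + cv::TermCriteria::COUNT, 30, 0.1));
            imagePoints.push_back(corners);
            objectPoints.push_back(model);
        }
    }

    cv::Mat K, dist;                // intrinsic matrix and distortion coefficients
    std::vector<cv::Mat> rvecs, tvecs;
    cv::calibrateCamera(objectPoints, imagePoints, imageSize, K, dist, rvecs, tvecs);
    std::cout << "intrinsics:\n" << K << "\ndistortion:\n" << dist << std::endl;
    return 0;
}
```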
Step 3: use the OpenGL integrated with OpenCV to generate the three-dimensional virtual information corresponding to the identification images, and send it to the SDRAM for storage through the video DMA channel.
Step 4: the ARM processor acquires the original images from the USB camera in real time and transfers them to the FPGA for buffering through the video DMA channel.
Step 5: design the image pre-processing IP core module with the Vivado HLS software and pre-process the original image to obtain the pre-processed image. It is worth mentioning that the pre-processing includes grayscale conversion, binarization by threshold segmentation, contour detection, polygon approximation of the detected contours, finding quadrilaterals resembling the identification image as candidate identification regions, and recording the corner locations of the candidate identification regions.
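The same pre-processing chain, expressed in plain OpenCV for clarity (in the embodiment these stages run inside the FPGA IP core module); the threshold value and minimum contour area are assumptions.

```cpp
#include <opencv2/opencv.hpp>
#include <cmath>
#include <vector>

// Grayscale -> threshold -> contours -> polygon approximation -> quadrilateral candidates.
std::vector<std::vector<cv::Point2f>> findCandidateRegions(const cv::Mat& bgr) {
    cv::Mat gray, bin;
    cv::cvtColor(bgr, gray, cv::COLOR_BGR2GRAY);
    cv::threshold(gray, bin, 128, 255, cv::THRESH_BINARY_INV);   // fixed threshold assumed

    std::vector<std::vector<cv::Point>> contours;
    cv::findContours(bin, contours, cv::RETR_LIST, cv::CHAIN_APPROX_SIMPLE);

    std::vector<std::vector<cv::Point2f>> candidates;
    for (const std::vector<cv::Point>& c : contours) {
        std::vector<cv::Point> poly;
        cv::approxPolyDP(c, poly, 0.02 * cv::arcLength(c, true), true);
        // Keep convex quadrilaterals of reasonable size as candidate identification regions.
        if (poly.size() == 4 && cv::isContourConvex(poly) &&
            std::fabs(cv::contourArea(poly)) > 1000.0) {
            std::vector<cv::Point2f> corners;
            for (const cv::Point& p : poly)
                corners.push_back(cv::Point2f((float)p.x, (float)p.y));  // record corner locations
            candidates.push_back(corners);
        }
    }
    return candidates;
}
```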
Step 6: send the pre-processed image back to the ARM processor through the video DMA channel; write an augmented reality processing program under Linux based on OpenCV with integrated OpenGL; recover the front view of the marker from the original image, identify the special identifier in the candidate identification regions of step 5, and perform pose estimation on the candidate identification regions in which a special identifier is identified to obtain the extrinsic parameters of the USB camera. In the present embodiment, the extrinsic parameters of the USB camera comprise a rotation matrix and a translation vector.
Step 7: for the candidate identification regions of step 5 in which a special identifier is identified, import from the SDRAM, through the video DMA channel, the three-dimensional virtual information corresponding to the identified identification image, and fuse the corresponding virtual three-dimensional information with the original image according to the intrinsic and extrinsic parameters of the USB camera, obtaining the image after virtual-real fusion.
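A minimal sketch of the geometry behind this fusion step: given the intrinsic matrix, distortion coefficients and the extrinsic rvec/tvec from step 6, virtual 3D points placed on the marker can be projected into the original frame and drawn. The embodiment renders the virtual content with OpenGL; a wireframe cube drawn with OpenCV is used here only to illustrate how the intrinsic and extrinsic parameters align virtual content with the real image, and the cube size is an assumption.

```cpp
#include <opencv2/opencv.hpp>
#include <vector>

// Overlay a virtual wireframe cube standing on the marker into the real frame.
void drawVirtualCube(cv::Mat& frame, const cv::Mat& K, const cv::Mat& dist,
                     const cv::Mat& rvec, const cv::Mat& tvec) {
    const float s = 50.0f;                                    // cube edge in marker units (assumed)
    std::vector<cv::Point3f> cube = {
        {0, 0, 0}, {s, 0, 0}, {s, s, 0}, {0, s, 0},           // base on the marker plane
        {0, 0, -s}, {s, 0, -s}, {s, s, -s}, {0, s, -s}        // top face above the marker
    };

    std::vector<cv::Point2f> pts;
    cv::projectPoints(cube, rvec, tvec, K, dist, pts);        // 3D points -> image plane

    auto P = [&](int i) { return cv::Point(cvRound(pts[i].x), cvRound(pts[i].y)); };
    for (int i = 0; i < 4; ++i) {
        cv::line(frame, P(i), P((i + 1) % 4), cv::Scalar(0, 255, 0), 2);          // base edges
        cv::line(frame, P(i + 4), P((i + 1) % 4 + 4), cv::Scalar(0, 255, 0), 2);  // top edges
        cv::line(frame, P(i), P(i + 4), cv::Scalar(0, 255, 0), 2);                // vertical edges
    }
}
```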
Step 8: transfer the virtual-real fused image of step 7 to the VGA controller IP core module through the video DMA channel, and the VGA controller IP core module controls the VGA display to show it. In this step, the program of the VGA controller IP core module is written in Vivado.
In the present embodiment, the ARM processor first runs the Linux system, buffers the original images captured by the USB camera and calibrates the USB camera. The FPGA then runs the hardware-accelerated image pre-processing IP core module program written with the Vivado HLS software and detects the candidate identification positions. Next, the ARM processor runs the OpenCV-based augmented reality program, recognizes the candidate identifications, completes three-dimensional registration and performs virtual-real fusion. Finally, the FPGA implements the driver program of the VGA controller IP core module for real-time display. The present embodiment uses an ARM processor + FPGA architecture for joint software-hardware design, which significantly improves the real-time performance of the image processing algorithms, reduces the complexity and development cost of a traditional hardware architecture, makes the design and integration of user IP core modules simple, direct and flexible, and offers low power consumption and high performance.
For the present embodiment, the above step 5 can be further refined; the flow chart after refinement is shown in Fig. 4. In Fig. 4, the above step 5 further comprises:
Step 5-1: write the image pre-processing IP core module program in the Vivado HLS software and convert the image buffered in the FPGA into a Mat-type image. The Vivado HLS software integrates an OpenCV-like library, with which the image buffered in the FPGA is converted into a Mat-type image.
Step 5-2: convert the Mat-type image from a three-channel color image into a single-channel grayscale image.
Step 5-3: binarize the single-channel grayscale image with a threshold method to obtain a binary image.
Step 5-4: perform contour detection on the binary image to obtain an image containing polygonal contours.
Step 5-5: apply polygon approximation to the polygonal contours with an approximate-polygon method, and discard contour regions that are not quadrilaterals.
Step 5-6: compute the corner locations of the candidate identification regions and save them at the end of the original image data as candidate identification position data.
Step 5-7: use the Vivado HLS software to apply pipeline optimization to the image pre-processing IP core module program, i.e. pipeline the grayscale conversion, threshold binarization, contour detection and polygon approximation stages, optimizing processing speed and resource usage; generate the RTL code and package it as the IP core module.
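As a small illustration of the kind of pipeline optimization step 5-7 refers to (not the actual IP core module program), a per-pixel loop can be annotated so that Vivado HLS schedules one pixel per clock cycle; the image size, interface pragmas and threshold value are assumptions.

```cpp
// Thresholding loop pipelined to an initiation interval of one pixel per cycle.
void gray_to_binary(const unsigned char gray[1280 * 720], unsigned char bin[1280 * 720]) {
#pragma HLS INTERFACE m_axi     port=gray
#pragma HLS INTERFACE m_axi     port=bin
#pragma HLS INTERFACE s_axilite port=return
    for (int i = 0; i < 1280 * 720; ++i) {
#pragma HLS PIPELINE II=1
        bin[i] = (gray[i] > 128) ? 255 : 0;   // threshold value assumed
    }
}
```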
For the present embodiment, the above step 6 can be further refined; the flow chart after refinement is shown in Fig. 5. In Fig. 5, the above step 6 further comprises:
Step 6-1: send the pre-processed image, which contains the locations of the candidate identification regions, back to the ARM processor through the video DMA channel, apply a perspective transform to each candidate identification region, and obtain the square view of the candidate identification region.
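A minimal sketch of this step with the standard OpenCV calls; the side length of the canonical square view is an assumption.

```cpp
#include <opencv2/opencv.hpp>
#include <vector>

// Warp one candidate identification region, given its four corners, into a square front view.
cv::Mat rectifyCandidate(const cv::Mat& gray, const std::vector<cv::Point2f>& corners) {
    const int N = 64;                                         // canonical view size (assumed)
    std::vector<cv::Point2f> dst;
    dst.push_back(cv::Point2f(0.0f, 0.0f));
    dst.push_back(cv::Point2f(N - 1.0f, 0.0f));
    dst.push_back(cv::Point2f(N - 1.0f, N - 1.0f));
    dst.push_back(cv::Point2f(0.0f, N - 1.0f));

    cv::Mat H = cv::getPerspectiveTransform(corners, dst);    // homography from the four corners
    cv::Mat view;
    cv::warpPerspective(gray, view, H, cv::Size(N, N));       // square front view of the marker
    return view;
}
```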
Step 6-2: binarize the candidate identification region with the Otsu algorithm, removing gray pixels and keeping only black and white pixels.
Step 6-3: compute the Hamming code information of the inner area of the square view of the candidate identification region and its Hamming distance to the Hamming code information of the identification images stored in the SDRAM; rotate the candidate identification region by 90 degrees clockwise or counterclockwise in turn and recompute the Hamming distance; if the current minimum Hamming distance is 0, the current candidate identification region is a correct identification region.
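A minimal sketch of the rotate-and-compare test described in step 6-3, assuming the marker bits have already been extracted from the square view into a small 8-bit matrix and are compared against one stored reference code.

```cpp
#include <opencv2/opencv.hpp>
#include <algorithm>
#include <climits>

// Number of differing cells between two equally sized bit matrices.
static int hammingDistance(const cv::Mat& a, const cv::Mat& b) {
    int d = 0;
    for (int r = 0; r < a.rows; ++r)
        for (int c = 0; c < a.cols; ++c)
            if (a.at<uchar>(r, c) != b.at<uchar>(r, c)) ++d;
    return d;
}

// Rotate the candidate code by 0/90/180/270 degrees and keep the minimum distance;
// a minimum of 0 means the candidate matches the stored identification image.
int minDistanceOverRotations(cv::Mat candidate, const cv::Mat& reference) {
    int best = INT_MAX;
    for (int k = 0; k < 4; ++k) {
        best = std::min(best, hammingDistance(candidate, reference));
        cv::Mat rotated;
        cv::transpose(candidate, rotated);
        cv::flip(rotated, rotated, 1);       // transpose + horizontal flip = 90 degree rotation
        candidate = rotated;
    }
    return best;
}
```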
Step 6-4: after the correct identification region is found, call an OpenCV function to locate the corner positions with sub-pixel precision, obtaining accurate corner positions.
Step 6-5: from the intrinsic parameters of the USB camera and the corner positions of the candidate identification region, call an OpenCV function to compute the extrinsic parameters of the USB camera; the extrinsic parameters comprise a rotation matrix and a translation vector.
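A minimal sketch of steps 6-4 and 6-5 using the OpenCV functions the embodiment refers to; the physical marker side length is an assumption.

```cpp
#include <opencv2/opencv.hpp>
#include <vector>

// Refine the four marker corners to sub-pixel precision, then recover the extrinsic
// rotation matrix R and translation vector tvec of the USB camera with solvePnP.
void estimateMarkerPose(const cv::Mat& gray, std::vector<cv::Point2f>& corners,
                        const cv::Mat& K, const cv::Mat& dist,
                        cv::Mat& R, cv::Mat& tvec) {
    cv::cornerSubPix(gray, corners, cv::Size(5, 5), cv::Size(-1, -1),
                     cv::TermCriteria(cv::TermCriteria::EPS + cv::TermCriteria::COUNT, 30, 0.01));

    const float s = 100.0f;                       // marker side length in mm (assumed)
    std::vector<cv::Point3f> object;              // marker corners in the marker's own frame
    object.push_back(cv::Point3f(0, 0, 0));
    object.push_back(cv::Point3f(s, 0, 0));
    object.push_back(cv::Point3f(s, s, 0));
    object.push_back(cv::Point3f(0, s, 0));

    cv::Mat rvec;
    cv::solvePnP(object, corners, K, dist, rvec, tvec);   // extrinsic parameters
    cv::Rodrigues(rvec, R);                               // rotation vector -> 3x3 rotation matrix
}
```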
In summary, in the present embodiment this patent takes the ARM processor as the core, assisted by the FPGA, and builds an augmented reality system based on Zynq software-hardware co-processing. The system partitions the program modules flexibly between software and hardware and uses an on-chip high-speed AXI bus, which raises throughput, lowers power consumption, and provides good real-time performance and strong real-time processing capability. The system accelerates identification processing, thereby improving recognition accuracy and stability, allowing the user to obtain, promptly and accurately, the pre-built virtual information that best matches the real information and to see it displayed in real time on the VGA display, further improving the user experience.
The foregoing is only a preferred embodiment of the present invention and is not intended to limit this patent. Any modification, equivalent substitution, improvement, etc. made within the spirit and principles of this patent shall fall within the protection scope of this patent.

Claims (1)

1. An augmented reality system based on Zynq software-hardware co-processing, comprising a Zynq main processor, a USB camera, a USB control chip, DDR3 SDRAM, an SD card, SDRAM and a VGA display, characterized in that:
the Zynq main processor comprises a processor system and an FPGA connected by a high-speed AXI bus; the processor system comprises an ARM processor, a DDR3 controller, four AXI_HP interfaces, four AXI_GP interfaces and one AXI_ACP interface; the FPGA comprises an SDRAM controller IP core module, a VGA controller IP core module and an image pre-processing IP core module;
the USB camera is connected to the USB control chip, and the USB control chip is connected to the ARM processor; the DDR3 SDRAM is connected to the ARM processor through the DDR3 controller, and the DDR3 controller is also connected to the high-speed AXI bus through a DMA transfer channel; the SD card is connected to the ARM processor; the SDRAM controller IP core module is connected to the SDRAM and is also connected to the high-speed AXI bus through a video direct memory transfer channel; the input and output of the image pre-processing IP core module are both connected to the high-speed AXI bus through video direct memory transfer channels; the VGA controller IP core module is connected to the VGA display and is also connected to the high-speed AXI bus through a video direct memory transfer channel.
CN201620319364.0U 2016-04-15 2016-04-15 Augmented reality system based on zynq software and hardware concurrent processing Expired - Fee Related CN205608814U (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201620319364.0U CN205608814U (en) 2016-04-15 2016-04-15 Augmented reality system based on zynq software and hardware concurrent processing

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201620319364.0U CN205608814U (en) 2016-04-15 2016-04-15 Augmented reality system based on zynq software and hardware concurrent processing

Publications (1)

Publication Number Publication Date
CN205608814U true CN205608814U (en) 2016-09-28

Family

ID=56972252

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201620319364.0U Expired - Fee Related CN205608814U (en) 2016-04-15 2016-04-15 Augmented reality system based on zynq software and hardware concurrent processing

Country Status (1)

Country Link
CN (1) CN205608814U (en)


Cited By (8)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN105844654A (en) * 2016-04-15 2016-08-10 中国科学院上海技术物理研究所 Augmented reality system and method based on Zynq software and hardware coprocessing
CN109246331A (en) * 2018-09-19 2019-01-18 郑州云海信息技术有限公司 A kind of method for processing video frequency and system
CN109348124A (en) * 2018-10-23 2019-02-15 Oppo广东移动通信有限公司 Image transfer method, device, electronic equipment and storage medium
CN109348124B (en) * 2018-10-23 2021-06-11 Oppo广东移动通信有限公司 Image transmission method, image transmission device, electronic equipment and storage medium
CN111028231A (en) * 2019-12-27 2020-04-17 易思维(杭州)科技有限公司 Workpiece position acquisition system based on ARM and FPGA
CN111260084A (en) * 2020-01-09 2020-06-09 长安大学 Remote system and method based on augmented reality collaborative assembly maintenance
CN111260084B (en) * 2020-01-09 2024-03-15 长安大学 Remote system and method based on augmented reality cooperative assembly maintenance
CN114267337A (en) * 2022-03-02 2022-04-01 合肥讯飞数码科技有限公司 Voice recognition system and method for realizing forward operation

Similar Documents

Publication Publication Date Title
CN205608814U (en) Augmented reality system based on zynq software and hardware concurrent processing
CN105844654A (en) Augmented reality system and method based on Zynq software and hardware coprocessing
US20210279503A1 (en) Image processing method, apparatus, and device, and storage medium
Liu et al. Masc: Multi-scale affinity with sparse convolution for 3d instance segmentation
CN104881666B (en) A kind of real-time bianry image connected component labeling implementation method based on FPGA
JP7073247B2 (en) Methods for generating lane boundary detection models, methods for detecting lane boundaries, devices for generating lane boundary detection models, devices for detecting lane boundaries, equipment, computers readable Storage media and computer programs
JP6871314B2 (en) Object detection method, device and storage medium
Pauwels et al. A comparison of FPGA and GPU for real-time phase-based optical flow, stereo, and local image features
WO2023185785A1 (en) Image processing method, model training method, and related apparatuses
CN107766812B (en) MiZ 702N-based real-time face detection and recognition system
US20200250545A1 (en) Split network acceleration architecture
CN109683877B (en) SystemC-based GPU software and hardware interaction TLM system
CN111145215B (en) Target tracking method and device
WO2019222889A1 (en) Image feature extraction method and device
CN107077186A (en) Low-power is calculated as picture
CN105556503A (en) Dynamic memory control method and system thereof
CN108270968A (en) A kind of infrared and visual image fusion detection system and method
CN110400362A (en) A kind of ABAQUS two dimension crack modeling method, system and computer readable storage medium based on image
JP6128617B2 (en) Image recognition apparatus and program
CN101567078B (en) Dual-bus visual processing chip architecture
Liang et al. The design of objects bounding boxes non-maximum suppression and visualization module based on FPGA
WO2023202367A1 (en) Graphics processing unit, system, apparatus, device, and method
CN107506773A (en) A kind of feature extracting method, apparatus and system
WO2023109086A1 (en) Character recognition method, apparatus and device, and storage medium
CN204131656U (en) Be applied to the assistant images processing unit of augmented reality system

Legal Events

Date Code Title Description
C14 Grant of patent or utility model
GR01 Patent grant
TR01 Transfer of patent right

Effective date of registration: 20170522

Address after: 201821, Shanghai Jiading Industrial Zone, green road, No. 2398, comprehensive experimental building 1, 1, the ground floor, Jiading District

Patentee after: SHANGHAI JIWU PHOTOELECTRIC TECHNOLOGY CO.,LTD.

Address before: 200083 Yutian Road, Shanghai, No. 500, No.

Patentee before: SHANGHAI INSTITUTE OF TECHNICAL PHYSICS, CHINESE ACADEMY OF SCIENCE

CF01 Termination of patent right due to non-payment of annual fee

Granted publication date: 20160928