The methodology for developing optimized accelerated applications comprises two major phases: architecting the application and developing the kernels. In the first phase, you make key decisions about the application architecture: which software functions to accelerate onto ACAP kernels, how much parallelism can be achieved, and how to express it in code. In the second phase, you implement the kernels by structuring the source code and applying the necessary build options to create the kernel architecture needed to achieve the target performance. The following tutorials illustrate the use of this methodology in real-world applications.
| Tutorial | Description |
|----------|-------------|
| LeNet Tutorial | This tutorial uses the LeNet algorithm to implement a system-level design that performs image classification using the AI Engine and PL logic, including block RAM (BRAM). The design demonstrates functional partitioning between the AI Engine and PL. It also highlights memory partitioning and hierarchy among DDR memory, PL (BRAM), and AI Engine memory. |
| Super Sampling Rate FIR Filters | This tutorial provides a methodology for making appropriate design choices based on the filter characteristics, along with examples of how to implement Super Sampling Rate (SSR) FIR filters on a Versal ACAP AI Engine processor array. |
| Beamforming Design | This tutorial demonstrates the creation of a beamforming system running on the AI Engine, PL, and PS, and the validation of the design running on these heterogeneous domains. |
| AIE Emulation on Custom Platforms | This tutorial demonstrates the creation and emulation of an AIE design, including the Adaptive DataFlow (ADF) graph, RTL kernels, and a custom VCK190 platform. |
| Tutorial | Description |
|----------|-------------|
| A to Z Bare-metal Flow | This tutorial introduces a complete end-to-end flow for a bare-metal host application using AI Engines and PL kernels. |
| Using GMIO with AIE | This tutorial introduces the use of global memory I/O (GMIO) for sharing data between the AI Engines and external DDR memory. |
| Runtime Parameter Reconfiguration | This tutorial demonstrates how to dynamically update AI Engine runtime parameters. |
| Packet Switching | This tutorial illustrates how to use data packet switching with AI Engine designs to optimize efficiency. |
| AI Engine Versal Integration for Hardware Emulation and Hardware | This tutorial demonstrates creating a system design running on the AI Engine, PS, and PL, and validating the design running on these heterogeneous domains by running hardware emulation. |
| Versal System Design Clocking | This tutorial demonstrates clocking concepts for the Vitis compiler, defining clocking for ADF graph PL kernels and PLIO kernels using the clocking automation functionality. |
| Using Floating-Point in the AI Engine | These examples demonstrate floating-point vector computations in the AI Engine. |
| DSP Library Tutorial | This tutorial demonstrates how to use kernels provided by the DSP library for a filtering application, how to analyze the design results, and how to use filter parameters to optimize the design's performance using simulation. |
| Debug Walkthrough Tutorial | This tutorial demonstrates how to debug a multi-processor application on the Versal ACAP AI Engines, using a beamformer example design. It illustrates both functional and performance-level debug techniques. |
| AI Engine DSP Library and Model Composer Tutorial | This tutorial shows how to design AI Engine applications using Model Composer. This blockset for Simulink demonstrates how easy it is to develop applications for Xilinx devices, integrating RTL/HLS blocks for the programmable logic as well as AI Engine blocks for the AI Engine array. |
Copyright © 2020–2021 Xilinx