Stick to Vivado 19.1 or update to Vitis 20.x (including Vivado) #114

Open
the-snowwhite opened this issue Sep 28, 2020 · 13 comments

the-snowwhite commented Sep 28, 2020

All the Xilinx boards should be able to utilize the new Vitis HLS-style compiler (running C code on the FPGA).

Version 2020.1 is recommended for that use.


cerna commented Sep 28, 2020

Reading through the basic documentation of Vitis, I would say to go with the flow and the latest'n'greatest. No need to stay in the past.

What I think is the biggest problem, and what should be addressed, is that with the shutdown of the Machinekit Jenkins CI server there are (probably) no tests (and builds) running any more. That is a big issue.

I have only lightly looked into this, but the sizes of the installations seem to be a big problem. The public runners are fairly limited in memory and disk space.

So which software would be better from this point of view?


claudiolorini commented Sep 28, 2020

Where I work we are making 'the big jump' from dockerized 2019.1 to Vitis 2020.1; we had to abandon the Docker images and centralize all development on a (big) server. The Vitis installation is HUGE.


cerna commented Sep 28, 2020

@claudiolorini,
so as a small open-source project with no centralized entity, we are pretty much boned, as there are no public CI/CD services, free for open source, that could accommodate this kind of usage.

BTW, do you perchance know how big the smallest server that can handle it is?


the-snowwhite commented Oct 1, 2020

@cerna,
Docker image sizes for Vivado and Vitis:

vitis-bionic                     2020.1              71.4GB
bionic-vivado                    2020.1              35.9GB

The recommended RAM is 16-32 GB.

I created the images with a Dockerfile from here:

My current Docker startup is as follows:

/usr/bin/docker run --privileged --memory 16g --shm-size 1g \
    --device /dev/snd \
    -itv $(pwd):/work \
    -e DISPLAY=$DISPLAY \
    --net=host \
    -v $HOME/.Xauthority:/home/vivado/.Xauthority \
    -v $HOME/.Xresources:/home/vivado/.Xresources \
    -v $HOME/.Xilinx:/home/vivado/.Xilinx \
    -v /tftpboot:/tftpboot \
    vitis-bionic:2020.1 \
    /bin/bash -c 'cd /work && /tools/Xilinx/Vivado/2020.1/bin/vivado -stack 2000'

the-snowwhite commented

I decided to take the simple, modest way first with the FZ3.

  1. The first reason is that Vitis 2020.1 has some quirks that are promised to be fixed in v2020.2, which has not yet come out.
     Even though I have been able to find guides and create running demonstrations of the vector-add example on both the Ultra96
     and the FZ3, this is the only usage example I have been able to find and test, and my knowledge of OpenCL is almost zero, so I don't find it very useful or inspiring yet.
  2. My setups currently rely on PetaLinux for the whole boot-file setup (roughly the flow sketched below), and I see no (Ultra96, FZ3) BSPs yet even for Vivado 2020.1, if they ever arrive;
     creating one from scratch is a lot of long-winded, hard work. (This may change if Vitis 2020.2 is able to generate a functioning boot.bin for Linux use without requiring PetaLinux.)
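
For context, the PetaLinux boot-file flow I rely on is roughly the following (a sketch; the project name and hardware-export path are placeholders, and exact flags differ between PetaLinux versions):

# create a ZynqMP project and import the hardware export from Vivado
# (.hdf in 2019.1, .xsa in 2020.x)
petalinux-create -t project --template zynqMP -n fz3_project
cd fz3_project
petalinux-config --get-hw-description=<path-to-hw-export>

# build the kernel, rootfs and boot components
petalinux-build

# assemble BOOT.BIN from the FSBL, bitstream and U-Boot
petalinux-package --boot --fsbl images/linux/zynqmp_fsbl.elf \
    --fpga images/linux/system.bit --u-boot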

the-snowwhite commented

@cerna,
I just uploaded my current working Vivado and PetaLinux 2019.1 Docker images to Docker Hub.

thesnowwhite/bionic-vivado                                                                 2019.1              bf9ef149cf91        25 hours ago        27.3GB
thesnowwhite/petalinux                                                                     2019.1              8afdee7448b0        2 days ago          23.1GB

https://hub.docker.com/repository/docker/thesnowwhite/petalinux
https://hub.docker.com/repository/docker/thesnowwhite/bionic-vivado
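
Pulling and smoke-testing one of them should be as simple as this (a sketch; the in-image install path is my assumption, mirroring the 2020.1 layout above):

# pull the published image from Docker Hub
docker pull thesnowwhite/bionic-vivado:2019.1

# quick sanity check: print the Vivado version in batch mode
docker run --rm thesnowwhite/bionic-vivado:2019.1 \
    /tools/Xilinx/Vivado/2019.1/bin/vivado -version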

From what I can find on Docker GitHub Actions:

Building a Docker-based GitHub Action can be done either by providing a public Docker image or by including a Dockerfile that describes how the Docker image for the action should be built.

https://medium.com/better-programming/build-github-actions-using-docker-containers-c57a97be60e2

GitHub Actions v2 can pull Docker images that are published on Docker Hub:

https://github.community/t/use-docker-images-from-github-package-registry/16135

https://www.docker.com/blog/docker-github-actions/

The only limitation I can see is a time limit on running a Docker container (xx hours).
I can find no memory or Docker image size limitations on GitHub Docker actions,
so I'm willing to give this a shot, if I can figure out the GitHub Docker actions, that is.
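
In case it helps, such a job would essentially boil down to commands like these (a sketch; build.tcl is a placeholder for whatever build script the repo uses, and the Vivado path matches the images above):

# pull the prebuilt toolchain image, then run the build inside it
docker pull thesnowwhite/bionic-vivado:2019.1
docker run --rm -v $(pwd):/work thesnowwhite/bionic-vivado:2019.1 \
    /bin/bash -c 'cd /work && /tools/Xilinx/Vivado/2019.1/bin/vivado \
        -mode batch -source build.tcl'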

@cerna
Copy link
Contributor

cerna commented Oct 22, 2020

@the-snowwhite,

The only limitation I can see is a time limit on running a Docker container (xx hours).
I can find no memory or Docker image size limitations on GitHub Docker actions,
so I'm willing to give this a shot, if I can figure out the GitHub Docker actions, that is.

Unfortunately it is not so easy. GitHub Actions gives you a virtual machine for each job, which can run for a maximum of 6 hours and has the parameters of a Standard_DS2_v2 from Microsoft Azure: in absolute terms, 2 vCPUs, 7 GB of RAM and about 14 GB of temporary SSD storage. The Docker daemon and the containerized processes then run on this machine. It's not as if the Docker containers live in some big cloud all by their lonesome.

So for the numbers discussed in this thread, it is quite inadequate.
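
(A job that does nothing but print the runner's resources would confirm those numbers:)

nproc       # vCPU count
free -h     # available RAM
df -h /     # free disk on the root volume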


cerna commented Oct 22, 2020

@the-snowwhite,
however, as far as I know, Drone Cloud runs all Docker containers on one 128 GB machine, so if you could get away without the --privileged flag, it could run there.

(But I am not sure how they would look at this; maybe the Machinekit organization would get a ban.)

the-snowwhite commented

@cerna,
I do not need the --privileged flag for any of the Docker runs.
I just made some KSysGuard graphs of both the Vivado and PetaLinux runs for the latest added MPSoC board on my 32-CPU workstation, with the memory graph capped at 8 GB:

[Graph: Vivado bitfile run]

[Graph: PetaLinux bootfiles run]

Doesn't seem too bad resource-wise?
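
(These were captured with KSysGuard, but docker stats gives the same per-container CPU and memory numbers without it:)

# one-shot reading of CPU / memory / I/O usage for all running containers
docker stats --no-stream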

BTW: what's making you refrain from using the GitHub v2 Docker actions, which can pull in the images from dockerhub.com directly?


cerna commented Oct 22, 2020

@the-snowwhite,

BTW: what's making you refrain from using the GitHub v2 Docker actions, which can pull in the images from dockerhub.com directly?

Nothing, really. I looked at it when it was version 1, and it was basically a nice wrapper around the Docker daemon and affiliated tools. And now I can see that they have included a QEMU action, which is very nice of them. (I will have to look into whether it could be used for the EMCApplication build [when I finally kick myself enough to finish it].)

But it is still running on the aforementioned Standard_DS2_v2-sized runner. Probably the best course of action would be to just try it. Either it will build or it won't. Nothing worse can happen.

Doesn't seem too bad resource-wise?

Hmm, a 32-core machine. Looks nice. I am mostly worried about the disk space. So I will just try to download it in a workflow and run some basic shell command.
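
(Something as basic as this should already answer the disk question; if a 27 GB image doesn't fit into the roughly 14 GB of temporary storage, the pull itself will fail:)

df -h /                                         # disk before the pull
docker pull thesnowwhite/bionic-vivado:2019.1 \
    || echo "pull failed - probably out of disk"
df -h /                                         # disk after the pull
docker images                                   # confirm what actually landed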

the-snowwhite commented

Thanks, any help with getting the GitHub Docker stuff up and running for the mksocfpga repo is very, very welcome, as this is a bit far out of my comfort zone :-)
BTW, I'm ready to PR the FZ3 board.

the-snowwhite commented

@cerna,
I just updated the thesnowwhite/bionic-vivado:2019.1 image on Docker Hub, adding the board files for the ultramyir.
