merge with efabless/openlane? #1
Yes. Right after I published this, I learned that @donn was working on the same thing, so we joined up, which led to The-OpenROAD-Project/OpenLane#652. What I'm still waiting for is an officially released image from efabless with OpenLane + the sky130 PDK.
The functionality is basically waiting for me to flick a switch; what I'm worried about is that it'll overwhelm GitHub Actions :(
@donn maybe we could get those built using a custom worker on GCP instead? https://antmicro.com/blog/2021/08/open-source-github-actions-runners-with-gcp-and-terraform/ (happy to help with any of the setup that needs to be done)
Alternatively, I do have this Cloud Build recipe I've been using to build a derived notebook image: https://gist.github.com/proppy/cf84c89238a7aa3442358d4dbe009462#file-cloudbuild-yaml
@proppy That blog post is sadly little more than exposition. I tried following it and got nowhere fast. I would indeed appreciate help with the setup, as well as an actual technical document on how to use it.
@donn I fully understand that enabling this will make CI much more cumbersome. You're calling the shots here, but from my perspective I think it would be good to separate the flows for CI and releases. Users generally don't need the latest release. I'm still using OpenLANE v0.12 with a random PDK build from about a year ago for most of my work, because I haven't had a compelling reason to use a newer version.

But I believe that an efabless-provided openlane+sky130 image would be a huge help for most people, including you. Many of the problems I see are related to installation issues, and personally I still fail more often than not to follow the instructions because I forget to set an env var or run from the wrong directory. With an image like that, many (most?) users would never need to touch the openlane repo or the PDKs unless they need something specific. Troubleshooting would also be much easier if people used a versioned image: "A: X doesn't work. B: Which image are you using? A: 1.63. B: Ah ok. That was fixed in 1.72", instead of keeping track of all the individual parts.

I can produce an image like that and distribute it, but I think it really needs to come from you, because we also want to be able to use this for the MPW precheck. And perhaps we don't need to build the PDK in the image at all? I see there are some CI actions in the open_pdks repo (https://github.com/RTimothyEdwards/open_pdks/actions), so perhaps we can just grab ready-built PDKs from there and put them in the image, kind of like what I do in this repo, but from an upstream source.
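To make that idea concrete, something along these lines could pull a prebuilt PDK out of the upstream CI artifacts at image-build time. This is only a rough sketch: the artifact names, the `sky130` filter and the archive layout are guesses, not something open_pdks is known to publish in exactly this form.

```python
#!/usr/bin/env python3
"""Sketch: fetch a prebuilt PDK from upstream CI artifacts instead of
rebuilding it inside the image. The artifact naming and layout are
assumptions; adjust them to whatever open_pdks actually publishes."""
import io
import os
import zipfile

import requests  # pip install requests

REPO = "RTimothyEdwards/open_pdks"
TOKEN = os.environ["GITHUB_TOKEN"]          # artifact downloads require authentication
PDK_ROOT = os.environ.get("PDK_ROOT", "/opt/pdks")
HEADERS = {"Authorization": f"token {TOKEN}"}

# 1. List recent workflow artifacts on the upstream repo.
resp = requests.get(f"https://api.github.com/repos/{REPO}/actions/artifacts",
                    headers=HEADERS, timeout=60)
resp.raise_for_status()
artifacts = resp.json()["artifacts"]

# 2. Pick the newest artifact that looks like a sky130 build (name filter is a guess).
candidates = [a for a in artifacts if "sky130" in a["name"].lower() and not a["expired"]]
if not candidates:
    raise SystemExit("no prebuilt sky130 artifact found upstream")
latest = max(candidates, key=lambda a: a["created_at"])

# 3. Download the zip archive and unpack it into PDK_ROOT.
archive = requests.get(latest["archive_download_url"], headers=HEADERS, timeout=600)
archive.raise_for_status()
os.makedirs(PDK_ROOT, exist_ok=True)
zipfile.ZipFile(io.BytesIO(archive.content)).extractall(PDK_ROOT)
print(f"installed {latest['name']} ({latest['size_in_bytes'] >> 20} MiB) into {PDK_ROOT}")
```

Run inside the Dockerfile (or a CI step) before the OpenLane layers are added, this would keep the PDK build itself out of the image pipeline entirely.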
@olofk It's not about the CI being cumbersome, nor is it about me disagreeing with this strategy. Trust me, nobody would like users to just clone and use exactly one image more than I would; it would save me a lot of pain. What it is about, plain and simple, is the sheer volume of data: adding a full PDK build to the GitHub image has caused machines far faster than the GitHub Actions CI to lock up. And mind you, I am not discussing the build process here, just the result files. There are already flows in place to build the result files, such as https://github.com/Cloud-V/sky130-builds. The problem is that that is a partial PDK build, i.e., just sky130_fd_sc_hd. So the quote-unquote "solution" here is to have a different image for each SCL and set sky130_fd_sc_hd as the default.

Additionally, PDK build results require a specific path, which is highly setup-dependent. That is why this is not an official Efabless tool; rather, it's a stopgap solution I put together for a research project. Adding to that complexity, we're also adding support for asap7, for example. So, do we include all PDKs in one image? Or do we build multiple images, one for each PDK and then again one for each SCL? This is not as cut and dried as people believe.
Thanks for clarifying. I see that I didn't have a full understanding of the issue, and it seems you've already been through these ideas. Let's keep looking for a reasonable middle ground then. Happy to assist where I can.
So @donn, I've been doing some thinking and discussed this with @proppy. We think the best way forward could be to create a PDK manager. The thing is that we have pretty much exactly the same problem with SymbiFlow, where there is currently a whole bunch of containers, each bundling the toolchain with the data files for a particular device. I did a quick hack a while ago that allowed users to download them on demand, cache them, pick different versions and report their path; a bit like pkg-config, but with download abilities. A user of an openlane image would then add something like a pkg-config-style query (see the hypothetical sketch below). For the ASIC PDKs it does actually make more sense to store them outside of a container, since they will be used by other tools like simulators and SPICE which aren't necessarily bundled within the container. Sounds like a plan?
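Purely as an illustration of what such a PDK manager could look like; every command name, URL and cache path below is made up, and none of this corresponds to an existing tool.

```python
#!/usr/bin/env python3
"""Hypothetical sketch of a pkg-config-style PDK manager: download a
versioned PDK archive on demand, cache it, and print its install path.
All names, versions and URLs are illustrative placeholders."""
import argparse
import pathlib
import tarfile
import urllib.request

CACHE = pathlib.Path.home() / ".cache" / "pdkman"
# Hypothetical index mapping (pdk, version) to a prebuilt archive.
INDEX = {
    ("sky130A", "1.0.302"): "https://example.com/pdks/sky130A-1.0.302.tar.gz",
}

def install(pdk: str, version: str) -> pathlib.Path:
    """Download and unpack a PDK into the cache unless it is already there."""
    dest = CACHE / pdk / version
    if not dest.exists():
        url = INDEX[(pdk, version)]
        dest.mkdir(parents=True, exist_ok=True)
        archive, _headers = urllib.request.urlretrieve(url)
        with tarfile.open(archive) as tar:
            tar.extractall(dest)
    return dest

def main() -> None:
    parser = argparse.ArgumentParser(prog="pdkman")
    parser.add_argument("pdk")                        # e.g. sky130A
    parser.add_argument("--version", default="1.0.302")
    parser.add_argument("--print-path", action="store_true",
                        help="report the cached install path, pkg-config style")
    args = parser.parse_args()
    path = install(args.pdk, args.version)
    if args.print_path:
        print(path)

if __name__ == "__main__":
    main()
```

Inside the container a user would then do something like `export PDK_ROOT=$(pdkman sky130A --print-path)` (again, a hypothetical command), and the same cached PDK would be visible to simulators and SPICE running outside the container.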
@olofk Yes! That's something I've been considering. Problem is the sky130A PDK needs to be made portable first. Currently it's tied to its install location. |
@donn, a few other ideas we discussed w/ @olofk:
Also, about @donn's earlier point in #1 (comment):
I'm curious how frequently a given project switches between variants of a given PDK (or between PDKs); it might make sense to optimize the default distribution for the most common use case (as well as provide the pieces for developers that need to assemble something more custom).
Re relocatable: I just did a quick search for references to the install location. In libs.ref all I could find were a few such references; libs.tech was a bit more complicated, with several more of them, and I also found a bunch of similar ones elsewhere. Were those the ones you're thinking of, @proppy?

Re 2, I realized that the files aren't that large. At least the ones I'm looking at from the open_pdks CI artifacts are about 80 MB, so I think it could be perfectly fine to use PyPI for distribution if that saves us some work.

Re 3, I definitely think we should add FuseSoC core description files to the PDK builds so that we can easily pick up Verilog models for simulation etc. I did that e.g. for the SRAM macros, but I think it would make more sense to distribute the PDKs in a way that doesn't involve FuseSoC, because I'm worried there's a fair amount of work needed in FuseSoC to make this smooth. But I might be wrong.

Re having targets in the core description files for the PDKs themselves: I don't think there are any actions you can do with just the PDK. I see them more as dependencies of other things.
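For what it's worth, a crude relocation pass over the text files in a PDK tree could look something like the sketch below. The old and new prefixes are placeholders, and binary files and symlinks are simply skipped here; a real solution would need to handle them separately.

```python
#!/usr/bin/env python3
"""Sketch: rewrite a hardcoded install prefix inside a PDK tree to make
it relocatable. Prefixes are placeholders; only plain text files are
patched, symlinks and binary files are skipped."""
import os
import sys

OLD_PREFIX = "/usr/local/share/pdk"   # placeholder: prefix baked in at build time
NEW_PREFIX = sys.argv[1] if len(sys.argv) > 1 else os.environ.get("PDK_ROOT", "/opt/pdks")
PDK_TREE = sys.argv[2] if len(sys.argv) > 2 else "sky130A"

patched = 0
for root, _dirs, files in os.walk(PDK_TREE):
    for name in files:
        path = os.path.join(root, name)
        if os.path.islink(path):
            continue                      # leave symlinks alone
        try:
            with open(path, encoding="utf-8") as f:
                text = f.read()
        except (UnicodeDecodeError, OSError):
            continue                      # skip binary/unreadable files
        if OLD_PREFIX in text:
            with open(path, "w", encoding="utf-8") as f:
                f.write(text.replace(OLD_PREFIX, NEW_PREFIX))
            patched += 1

print(f"rewrote {OLD_PREFIX!r} -> {NEW_PREFIX!r} in {patched} files")
```

This is essentially what conda's prefix-replacement machinery (mentioned below) automates at install time.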
@olofk In parallel, I'll give packaging skywater-pdk and open_pdks with conda a shot; see hdl/conda-eda#159 and hdl/conda-eda#160. That will make them easy to use within the https://colab.research.google.com/ environment (which doesn't support containers, see googlecolab/colabtools#299 (comment)). @mithro pointed me to https://docs.conda.io/projects/conda-build/en/latest/resources/make-relocatable.html and https://docs.conda.io/projects/conda-build/en/latest/resources/define-metadata.html#detect-binary-files-with-prefix, which seem (at least on paper) to effectively work around RTimothyEdwards/open_pdks#60.
Now that The-OpenROAD-Project/OpenLane#846 has landed, I'm wondering what's missing to do the same thing in the official efabless/openlane image for the MPW shuttle. Can we use this issue to make a list?
I'd like to help :)