I’m compiling a bunch of AI stuff to try out various unrelated apps, and I’m making a mess of config files and extras. I’ve been using distrobox and conda. How could I do this better? Chroot? Separate user logins for extra home directories? Groups? Most of the packages need access to CUDA and localhost, and I’d like to keep them out of my main home directory.
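For reference, the per-app shape I’m picturing is something like this, assuming a reasonably recent distrobox that has the --home and --nvidia flags (the box name and image here are made up):

    # one box per app, with its own home directory and the host NVIDIA drivers shared in
    distrobox create --name app1 --image ubuntu:22.04 --nvidia --home ~/boxes/app1
    distrobox enter app1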
Thanks for the read. This is what I was thinking about trying but hadn’t quite fleshed out yet. It’s right at the edge of my learning curve. Perfect timing, thanks.
Do you have any advice for when the packages are mostly Python-based instead of driven by Makefiles?
For Python, a bunch of venvs should do it.
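A minimal sketch, assuming a standard python3 install (the paths are just examples; put the venvs wherever you want them, outside $HOME if you like):

    python3 -m venv ~/venvs/app1       # one venv per app
    source ~/venvs/app1/bin/activate
    pip install -r requirements.txt    # packages land inside the venv, not system site-packages
    deactivate                         # drop back to the bare shell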
This method should work with any command that installs files on your disk, but it’s probably not worth the headache when virtual environments already exist for Python.
Python, in these instances, is being used as the installer script. As far as I can tell it runs into all of the same packaging and directory issues that make does. Most of the packages have a Python startup script that takes a text file and installs everything listed in it, usually including a pip git+ address or two. So far, just getting my feet wet with AI has been enough reason for me to overlook what’s happening behind the curtain; the machine sits behind an external whitelist firewall all by itself. I’m just now getting to the point where I want to dial everything in so I know exactly what is happening.
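To make that concrete, the text file is usually just a requirements list that the startup script feeds to pip; roughly like this, with made-up package names (the real contents vary per app):

    # what the install text file tends to look like (hypothetical entries)
    numpy
    git+https://github.com/someorg/some-ai-lib.git@main

    # and under the hood the startup script does approximately:
    pip install -r requirements.txt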
I’ve noticed a few oddball times during installation when pip said something like “package unavailable; reverting to base system.” This was while running inside conda, which itself was inside a distrobox container. I’m not sure what “base system” refers to there, or whether it’s normal. I’m probing for any potential gotchas around Python and containers. I imagine it’s still just a matter of reading a lot of code in the installation path.
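For anyone poking at the same question, a few standard checks show which interpreter and pip a given shell actually resolves to; nothing here is conda- or distrobox-specific:

    which -a python pip         # every python/pip on PATH, in search order
    python -m pip --version     # the pip bound to the active interpreter, plus its on-disk location
    python -c 'import sys; print(sys.prefix, sys.base_prefix)'
    # in a venv, sys.prefix differs from sys.base_prefix; in a conda env the two match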