Hi! I see some enthusiasm here that I love

Let's say I'm someone who's already 10,000% sold on the value of reproducible builds and reproducible interactions with package management systems (I am!) and also already 100,000% sold on the value of doing it with content-addressed data and data structures (I am!). But I'm also a tad unfamiliar with python packaging ecosystems :) and so could use some more background and connective information about how this project hopes to impact that space specifically.

I took some time recently to experiment with poetry, hatch and packaging in response to this inquiry. I have mainly been a user of conda and pip (and some of the environment management tools downstream of pip), and of course docker as a kind of "everything" container, including its use in the python ecosystem as a way of providing blanket isolation for dependency systems. So I can briefly describe what I believe are the most popular systems for managing python app dependencies and for dealing with python apps written in the most common style.

# The background, the how and why of python, setuptools, pip and pypi

First, in the end, python is just an interpreter for a programming language, and as such it's written with a fairly conservative set of capabilities for importing and using code in named 'modules' and hierarchies of modules.  From the earliest versions I'm aware of, python modules came in two flavors: either they were python files (or directories of python files containing an `__init__.py`) located through a path search over the directories named in the PYTHONPATH environment variable, or they were shared libraries (dll, so, dylib) on the platform python is running on that export some basic symbols allowing python to create a PyModuleDef struct for their native code.
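That path-search behavior can be sketched concretely; everything here (the `mypkg` name, the temp directory) is invented for illustration:

```python
import os
import sys
import tempfile

# Build a throwaway package directory to demonstrate path-based import.
root = tempfile.mkdtemp()
pkg_dir = os.path.join(root, "mypkg")
os.mkdir(pkg_dir)

# A directory becomes an importable package by containing an __init__.py.
with open(os.path.join(pkg_dir, "__init__.py"), "w") as f:
    f.write("GREETING = 'hello from mypkg'\n")

# At startup, entries from PYTHONPATH are folded into sys.path; appending
# to sys.path at runtime has the same effect on the search.
sys.path.append(root)

import mypkg
print(mypkg.GREETING)  # hello from mypkg
```

The same search also covers the native-extension flavor: a `mypkg.so` (or `.pyd` on windows) sitting in one of those directories would be found the same way.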

At this time, setuptools and a standard package set for the python ecosystem didn't exist; egg and wheel, the file formats used for python software distribution, didn't exist either.  When you wanted to install a python app, you'd either get its dependencies from the operating system's package manager in case you were a weirdo running linux (more on this later), or more commonly download and individually install each of the packages using its own installer, or have a script that created a large PYTHONPATH with all the installed packages in it before running whatever thing you were interested in.  People made some packagers in the early days that amounted to zipping up a bunch of things that could be found in the PYTHONPATH and making either a standalone exe that links libpython or a shell script that sets the path and invokes python on the right source file.
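One descendant of that zip-and-set-the-path trick survives in python itself: the interpreter can import directly from zip archives placed on sys.path. A minimal sketch, with the `bundled` module name and archive path made up for illustration:

```python
import os
import sys
import tempfile
import zipfile

# Zip up a module, the way the early packagers bundled a PYTHONPATH's worth
# of code into one archive.
archive = os.path.join(tempfile.mkdtemp(), "app-bundle.zip")
with zipfile.ZipFile(archive, "w") as zf:
    zf.writestr("bundled.py", "def run():\n    return 'app started'\n")

# Putting the archive itself on sys.path makes its contents importable,
# much as those generated shell scripts arranged via PYTHONPATH.
sys.path.insert(0, archive)

import bundled
print(bundled.run())  # app started
```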

In those days people were used to heavyweight software installers that copied a lot of data, and weren't used to having many OS-level dependencies beyond shared libraries with C ABI functions exported.  The boundary between open source and closed source was treated in many ways as fluidly as it is today, unlike many of the years in between: with electron and npm dependencies, a closed source app can now exist in a middle state, depending on a lot of open source software while being treated as partly closed source, and a good many apps are distributed this way nowadays.  So too in the 90s when python got its start, app distribution was often done this way.  Python programs often depended on specific versions of the ecosystem, or on specific exports from the host programs they were intended to run in.  The "linking exception" from the LGPL also factored in: closed source programs driven by TCL, perl and python often shared space with properly open source libraries and an open source python interpreter in this way.

For the computer environments of the era, this wasn't the worst way of doing things.  Languages and modules evolved slowly, and open source software was more of an exception than the rule it is today.  setuptools (the python library used to resolve dependencies and install python modules from pypi and similar repositories), the repositories themselves, and ways of advertising libraries as separately installable entities of their own didn't properly exist, and few were using libraries that way, often picking them apart or using the day's packagers to agglomerate them into full apps.  As use of python grew, though, python programmers came to expect the environments they'd been using, and as the reach of python code itself grew, the fracturing of the python ecosystem began to become a problem for python programmers.

OS distributions at first packaged a single version of python and upgraded it periodically.  In the early days, interfaces remained consistent, and feature testing in python code (similar to the conventional-wisdom methods of writing robust javascript code that would grow up about a decade later) was enough for python apps in the nascent ecosystem to function across those upgrades.  Libraries were a bit tougher, but OS distributions often patched in a full set of libraries for the ecosystem in their tools (rpm, deb, ebuild etc).  This worked ok for a while, but it was still very common in those days for a random python app one wanted to run to have peculiar interactions with the base system and need special setup.
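Feature testing, in this sense, meant probing for a capability at import time rather than branching on a version number. The classic StringIO dance is representative; it still runs on python 3 by falling through to the last branch:

```python
# Probe for the fastest available implementation; whichever import
# succeeds first wins, with no version-number checks anywhere.
try:
    from cStringIO import StringIO  # C-accelerated, python 2 only
except ImportError:
    try:
        from StringIO import StringIO  # pure-python, python 2 only
    except ImportError:
        from io import StringIO  # python 3

buf = StringIO()
buf.write("feature testing")
print(buf.getvalue())  # feature testing
```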

OS distributions went through a phase of first offering choices, then slotting packages of all kinds (leading to the need for slotting to be woven into things that used packages downstream, something that really isn't the concern of most of those things).  Slotting is when an OS distribution provides multiple variants of a package under different names, such as offering python 2.7 and 3.9 via the names 'python2' and 'python3'.  Since software build systems are automatic, various tricks were used to try to make slotted packages work in an ecosystem that didn't support them, but slotting library code was ultimately a failed strategy when both the languages and the packages themselves were opinionated about which version was the one that got the unversioned name.

It was in this era that setuptools grew up in python.  Having seen the success of perl's CPAN and its legacy of letting perl programmers interchange higher-level library code, python gained an "indexing service" and a metadata format for package distribution that python's tools could use to download packages.  The web service does a good job at what it was designed to do: make python packages easy to distribute even when one didn't have a sophisticated web service.  It's based on the directory listing formats of the day and normal html hyperlinking, in some ways echoing modern REST-style hypermedia, though this was before Douglas Crockford formalized JSON, in an era where the expectation was more of XML and the bespoke family of angle-bracket schemas as a lingua franca interchange format.

Some things that are expected today, such as idempotence of uploads and hashes as part of verifying downloads, were not included.  End users generally weren't using pypi by itself; it was expected to serve a developer experience, much the way npm would later.
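That plain-hyperlink design survives as pypi's "simple" index, later standardized in PEP 503, which is also where download-verification hashes eventually landed, as URL fragments on the links. A sketch of pulling filenames and hashes out of such a page with the stdlib; the page content here is a made-up example in that style, not real pypi data:

```python
from html.parser import HTMLParser

# A made-up fragment in the style of a PEP 503 "simple" index page.
PAGE = """
<html><body>
<a href="example-1.0.tar.gz#sha256=deadbeef">example-1.0.tar.gz</a>
<a href="example-1.1.tar.gz#sha256=cafef00d">example-1.1.tar.gz</a>
</body></html>
"""

class LinkCollector(HTMLParser):
    """Collect (filename, hash-fragment) pairs from anchor tags."""
    def __init__(self):
        super().__init__()
        self.links = []

    def handle_starttag(self, tag, attrs):
        if tag == "a":
            href = dict(attrs).get("href", "")
            path, _, frag = href.partition("#")
            self.links.append((path, frag))

parser = LinkCollector()
parser.feed(PAGE)
print(parser.links)
# [('example-1.0.tar.gz', 'sha256=deadbeef'), ('example-1.1.tar.gz', 'sha256=cafef00d')]
```

The fact that a package index is "just anchors on an html page" is exactly the directory-service heritage described above, and it's part of why mirroring or self-hosting an index has always been cheap.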