Python Project Setup 2026: uv + Ruff + Ty + Polars
This one simple Python stack will make your projects faster, cleaner, and easier to maintain.

# Introduction
Python project setup used to mean making a dozen small decisions before you wrote your first useful line of code. Which environment manager? Which dependency tool? Which formatter? Which linter? Which type checker? And if your project touched data, were you supposed to start with pandas, DuckDB, or something newer?
In 2026, that setup can be much simpler.
For most new projects, the cleanest default stack is:
- uv for Python installation, environments, dependency management, locking, and command running.
- Ruff for linting and formatting.
- Ty for type checking.
- Polars for dataframe work.
This stack is fast, modern, and notably coherent. Three of the four tools (uv, Ruff, and Ty) come from the same company, Astral, so they integrate cleanly with each other and share configuration in your pyproject.toml.
# Understanding Why This Stack Works
Older setups often looked like this:
pyenv + pip + venv + pip-tools or Poetry + Black + isort + Flake8 + mypy + pandas
This worked, but it created significant overlap, inconsistency, and maintenance overhead. You had separate tools for environment setup, dependency locking, formatting, import sorting, linting, and typing, and every new project started with a choice explosion. The 2026 default stack collapses all of that: fewer tools, fewer configuration files, and less friction when onboarding contributors or wiring up continuous integration (CI).
Before jumping into setup, let's take a quick look at what each tool in the 2026 stack does:
- uv: This is the base of your project setup. It creates the project, manages versions, handles dependencies, and runs your code. Instead of manually setting up virtual environments and installing packages, uv handles the heavy lifting. It keeps your environment consistent using a lockfile and ensures everything is correct before running any command.
- Ruff: This is your all-in-one tool for code quality. It is extremely fast, checks for issues, fixes many of them automatically, and also formats your code. You can use it instead of tools like Black, isort, Flake8, and others.
- Ty: This is a newer tool for type checking. It helps catch errors by checking types in your code and works with various editors. While newer than tools like mypy or Pyright, it is optimized for modern workflows.
- Polars: This is a modern library for working with dataframes. It focuses on efficient data processing using lazy execution, which means it optimizes queries before running them. This makes it faster and more memory efficient than pandas, especially for large data tasks.
# Reviewing Prerequisites
The setup is quite simple. Here are the few things you need to get started:
- Terminal: macOS Terminal, Windows PowerShell, or any Linux shell.
- Internet connection: Required for the one-time uv installer and package downloads.
- Code editor: VS Code is recommended because it works well with Ruff and Ty, but any editor is fine.
- Git: Required for version control; note that uv initializes a Git repository automatically.
That is it. You do not need Python pre-installed. You do not need pip, venv, pyenv, or conda. uv handles installation and environment management for you.
# Step 1: Installing uv
uv provides a standalone installer that works on macOS, Linux, and Windows without requiring Python or Rust to be present on your machine.
macOS and Linux:
curl -LsSf https://astral.sh/uv/install.sh | sh
Windows PowerShell:
powershell -ExecutionPolicy ByPass -c "irm https://astral.sh/uv/install.ps1 | iex"
After installation, restart your terminal and verify:
uv --version
You should see the installed version printed, for example:
uv 0.8.0
This single binary now replaces pyenv, pip, venv, pip-tools, and the project management layer of Poetry.
# Step 2: Creating a New Project
Navigate to your project directory and scaffold a new one:
uv init my-project
cd my-project
uv creates a clean starting structure:
my-project/
├── .python-version
├── pyproject.toml
├── README.md
└── main.py
Reshape it into a src/ layout, which improves imports, packaging, test isolation, and type-checker configuration:
mkdir -p src/my_project tests data/raw data/processed
mv main.py src/my_project/main.py
touch src/my_project/__init__.py tests/test_main.py
Your structure should now look like this:
my-project/
├── .python-version
├── README.md
├── pyproject.toml
├── uv.lock
├── src/
│   └── my_project/
│       ├── __init__.py
│       └── main.py
├── tests/
│   └── test_main.py
└── data/
    ├── raw/
    └── processed/
If you need a specific version (e.g. 3.12), uv can install and pin it:
uv python install 3.12
uv python pin 3.12
The pin command writes the version to .python-version, ensuring every team member uses the same interpreter.
# Step 3: Adding Dependencies
Adding dependencies is a single command that resolves, installs, and locks simultaneously:
uv add polars
uv automatically creates a virtual environment (.venv/) if one does not exist, resolves the dependency tree, installs packages, and updates uv.lock with exact, pinned versions.
For tools needed only during development, use the --dev flag:
uv add --dev ruff ty pytest
This places them in a separate [dependency-groups] section in pyproject.toml, keeping production dependencies lean. You never need to run source .venv/bin/activate; when you use uv run, it automatically activates the correct environment.
# Step 4: Configuring Ruff (Linting and Formatting)
Ruff is configured directly inside your pyproject.toml. Add the following sections:
[tool.ruff]
line-length = 100
target-version = "py312"
[tool.ruff.lint]
select = ["E4", "E7", "E9", "F", "B", "I", "UP"]
[tool.ruff.format]
docstring-code-format = true
quote-style = "double"
A 100-character line length is a good compromise for modern screens. The flake8-bugbear (B), isort (I), and pyupgrade (UP) rule groups add real value without overwhelming a new repository.
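As an illustration of what these rule groups catch, the snippet below (hypothetical code, not from this project) contains the classic mutable-default-argument bug that flake8-bugbear reports as B006:

```python
# B006: a mutable default argument is created once and shared across calls
def append_item(item, items=[]):  # Ruff flags this line as B006
    items.append(item)
    return items

print(append_item(1))  # [1]
print(append_item(2))  # [1, 2] -- the "empty" default kept the previous item!

# The conventional fix:
def append_item_fixed(item, items=None):
    if items is None:
        items = []
    items.append(item)
    return items

print(append_item_fixed(1))  # [1]
print(append_item_fixed(2))  # [2] -- a fresh list each call
```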
Running Ruff:
# Lint your code
uv run ruff check .
# Auto-fix issues where possible
uv run ruff check --fix .
# Format your code
uv run ruff format .
Notice the pattern: uv run <tool> <args>. You never install tools globally or activate environments manually.
# Step 5: Configuring Ty for Type Checking
Ty is also configured in pyproject.toml. Add these sections:
[tool.ty.environment]
root = ["./src"]
[tool.ty.rules]
all = "warn"
[[tool.ty.overrides]]
include = ["src/**"]
[tool.ty.overrides.rules]
possibly-unresolved-reference = "error"
[tool.ty.terminal]
error-on-warning = false
output-format = "full"
This configuration starts Ty in warning mode, which is ideal for adoption: you fix obvious issues first, then gradually promote rules to errors. Setting the root to ./src also keeps type-checker noise out of non-code directories like data/.
# Step 6: Configuring pytest
Add a section for pytest:
[tool.pytest.ini_options]
testpaths = ["tests"]
Run your test suite with:
uv run pytest
# Step 7: Examining the Complete pyproject.toml
Here is what your final configuration looks like with everything wired up — one file, every tool configured, with no scattered config files:
[project]
name = "my-project"
version = "0.1.0"
description = "Modern Python project with uv, Ruff, Ty, and Polars"
readme = "README.md"
requires-python = ">=3.12"
dependencies = [
"polars>=1.39.3",
]
[dependency-groups]
dev = [
"pytest>=9.0.2",
"ruff>=0.15.8",
"ty>=0.0.26",
]
[tool.ruff]
line-length = 100
target-version = "py312"
[tool.ruff.lint]
select = ["E4", "E7", "E9", "F", "B", "I", "UP"]
[tool.ruff.format]
docstring-code-format = true
quote-style = "double"
[tool.ty.environment]
root = ["./src"]
[tool.ty.rules]
all = "warn"
[[tool.ty.overrides]]
include = ["src/**"]
[tool.ty.overrides.rules]
possibly-unresolved-reference = "error"
[tool.ty.terminal]
error-on-warning = false
output-format = "full"
[tool.pytest.ini_options]
testpaths = ["tests"]
# Step 8: Writing Code with Polars
Replace the contents of src/my_project/main.py with code that exercises the Polars side of the stack:
"""Sample data analysis with Polars."""

import polars as pl


def build_report(path: str) -> pl.DataFrame:
    """Build a revenue summary from raw data using the lazy API."""
    q = (
        pl.scan_csv(path)
        .filter(pl.col("status") == "active")
        .with_columns((pl.col("revenue") / pl.col("users")).alias("rpu"))
        .group_by("segment")
        .agg(
            pl.len().alias("rows"),
            pl.col("revenue").sum().alias("revenue"),
            pl.col("rpu").mean().alias("avg_rpu"),
        )
        .sort("revenue", descending=True)
    )
    return q.collect()


def main() -> None:
    """Entry point with sample in-memory data."""
    df = pl.DataFrame(
        {
            "segment": ["Enterprise", "SMB", "Enterprise", "SMB", "Enterprise"],
            "status": ["active", "active", "churned", "active", "active"],
            "revenue": [12000, 3500, 8000, 4200, 15000],
            "users": [120, 70, 80, 84, 150],
        }
    )
    summary = (
        df.lazy()
        .filter(pl.col("status") == "active")
        .with_columns((pl.col("revenue") / pl.col("users")).round(2).alias("rpu"))
        .group_by("segment")
        .agg(
            pl.len().alias("rows"),
            pl.col("revenue").sum().alias("total_revenue"),
            pl.col("rpu").mean().round(2).alias("avg_rpu"),
        )
        .sort("total_revenue", descending=True)
        .collect()
    )
    print("Revenue Summary:")
    print(summary)


if __name__ == "__main__":
    main()
Before running, you need a build system in pyproject.toml so uv installs your project as a package. We will use Hatchling:
cat >> pyproject.toml << 'EOF'
[build-system]
requires = ["hatchling"]
build-backend = "hatchling.build"
[tool.hatch.build.targets.wheel]
packages = ["src/my_project"]
EOF
Then sync and run:
uv sync
uv run python -m my_project.main
You should see a formatted Polars table:
Revenue Summary:
shape: (2, 4)
┌────────────┬──────┬───────────────┬─────────┐
│ segment    ┆ rows ┆ total_revenue ┆ avg_rpu │
│ ---        ┆ ---  ┆ ---           ┆ ---     │
│ str        ┆ u32  ┆ i64           ┆ f64     │
╞════════════╪══════╪═══════════════╪═════════╡
│ Enterprise ┆ 2    ┆ 27000         ┆ 100.0   │
│ SMB        ┆ 2    ┆ 7700          ┆ 50.0    │
└────────────┴──────┴───────────────┴─────────┘
# Managing the Daily Workflow
Once the project is set up, the day-to-day loop is straightforward:
# Pull latest, sync dependencies
git pull
uv sync
# Write code...
# Before committing: lint, format, type-check, test
uv run ruff check --fix .
uv run ruff format .
uv run ty check
uv run pytest
# Commit
git add .
git commit -m "feat: add revenue report module"
# Changing the Way You Write Python with Polars
The biggest mindset shift in this stack is on the data side. With Polars, your defaults should be:
- Expressions over row-wise operations. Polars expressions let the engine vectorize and parallelize work. Avoid user-defined functions (UDFs) unless there is no native alternative, as UDFs are significantly slower.
- Lazy execution over eager loading. Use scan_csv() instead of read_csv(). This creates a LazyFrame that builds a query plan, allowing the optimizer to push filters down and eliminate unused columns.
- Parquet-first workflows over CSV-heavy pipelines. A good pattern for internal data preparation is to convert raw CSVs to Parquet once, then have every downstream step scan the Parquet files.
# Evaluating When This Setup Is Not the Best Fit
You may want a different choice if:
- Your team has a mature Poetry or mypy workflow that is working well.
- Your codebase depends heavily on pandas-specific APIs or ecosystem libraries.
- Your organization is standardized on Pyright.
- You are working in a legacy repository where changing tools would create more disruption than value.
# Implementing Pro Tips
- Never activate virtual environments manually. Use uv run for everything to ensure you are using the correct environment.
- Always commit uv.lock to version control. This ensures the project runs identically on every machine.
- Use --frozen in CI. This installs dependencies from the lockfile for faster, more reliable builds.
- Use uvx for one-off tools. Run tools without installing them in your project.
- Use Ruff's --fix flag liberally. It can auto-fix unused imports, outdated syntax, and more.
- Prefer the lazy API by default. Use scan_csv() and only call .collect() at the end.
- Centralize configuration. Use pyproject.toml as the single source of truth for all tools.
# Concluding Thoughts
The 2026 Python default stack reduces setup effort and encourages better practices: locked environments, a single configuration file, fast feedback, and optimized data pipelines. Give it a try; once you experience environment-agnostic execution, you will understand why developers are switching.
Kanwal Mehreen is a machine learning engineer and a technical writer with a profound passion for data science and the intersection of AI with medicine. She co-authored the ebook "Maximizing Productivity with ChatGPT". As a Google Generation Scholar 2022 for APAC, she champions diversity and academic excellence. She's also recognized as a Teradata Diversity in Tech Scholar, Mitacs Globalink Research Scholar, and Harvard WeCode Scholar. Kanwal is an ardent advocate for change, having founded FEMCodes to empower women in STEM fields.