🌍 It Works on My Machine... Now What?
- Separate configuration from code (Environment Variables)
- Manage dependencies with `requirements.txt`
- Containerize applications with Docker
- Use production WSGI servers (Gunicorn) instead of Flask dev server
- Understand CI/CD concepts
Quick reference: read configuration with `os.getenv('VAR_NAME')` — never commit secrets to Git. Pin exact versions (`Flask==2.3.0`) in a file generated with `pip freeze > requirements.txt`.
- Development Environment: Your workshop where you build prototypes. Messy, experimental, works only for you.
- Production Environment: The retail store where customers buy your product. Clean, reliable, must work for everyone.
- Environment Variables: Shipping labels and instructions. Each package (server) needs its own address (database URL), but the product inside is the same.
- requirements.txt: The ingredient list on food packaging. Anyone can recreate your recipe (app) by following the exact ingredient versions.
- Docker Container: Shipping everything in a standardized box. Your product, packaging, and instructions travel as one unit that arrives intact anywhere.
- Gunicorn (WSGI Server): An industrial-strength truck for delivery. The van you used in your workshop (Flask dev server) can't handle highway traffic.
Code that runs on your laptop is not ready for the world. Production code needs to be secure, scalable, and reproducible.
- Never Hardcode Secrets: API keys, passwords, and database URLs go in environment variables, never in code. Use `.env` files locally and secrets managers in production.
- Pin Your Dependencies: Use exact versions in requirements.txt (`Flask==2.3.0`, not `Flask`). Unpinned versions break apps when libraries update.
- Learn Docker Basics: Dockerfile, image, container, volume. These concepts transfer to Kubernetes and cloud platforms.
- Test Deployment Locally: Run your Docker container locally before pushing to production. Catch config errors early.
- Monitor Everything: Production apps need logging and monitoring. If it breaks at 3 AM, you need to know why.
Goal: Create an isolated Python environment for your project and manage dependencies properly.
- Create a new virtual environment: `python -m venv venv`
- Activate it: `source venv/bin/activate` (Mac/Linux) or `venv\Scripts\activate` (Windows)
- Install Flask and requests: `pip install flask requests`
- Generate requirements.txt: `pip freeze > requirements.txt`
- Deactivate and delete the venv, then recreate it and install from requirements.txt
- Verify all packages installed correctly with `pip list`
Bonus: Add a .gitignore file that excludes the venv/ directory.
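The whole cycle, condensed into one shell session (steps that need network access are left as comments, so you can run the rest offline):

```shell
# Condensed version of the exercise steps above
cd "$(mktemp -d)"                   # work in a throwaway directory
python3 -m venv venv                # 1. create the environment
. venv/bin/activate                 # 2. activate (Windows: venv\Scripts\activate)
# pip install flask requests        # 3. install project dependencies
pip freeze > requirements.txt       # 4. pin exact versions to a file
deactivate
rm -rf venv                         # 5. throw the environment away...
python3 -m venv venv                #    ...and rebuild it
. venv/bin/activate
# pip install -r requirements.txt   # 6. restore the exact same versions
pip list                            # 7. verify what is installed
```

The key idea: once `requirements.txt` exists, the environment itself is disposable — anyone can rebuild it exactly.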
Create a deployment checklist for your Flask application:
- Document all environment variables your app requires
- Create separate `requirements.txt` and `requirements-dev.txt` files
- Write a `README.md` with setup instructions for new developers
- Identify which settings should differ between dev and production
- List all external services your app depends on (databases, APIs, etc.)
🔐 Configuration & Secrets
Never commit passwords or API keys to Git!
```python
# ❌ BAD: Hardcoding secrets
DB_PASSWORD = "my_secret_password_123"

def connect_db():
    # If you push this to GitHub, hackers will find it!
    print(f"Connecting with password: {DB_PASSWORD}")
```

```python
# ⭐ BEST PRACTICE: Use environment variables
import os
from dotenv import load_dotenv

# Load variables from .env file (local dev only)
load_dotenv()

def connect_db():
    # Get from environment, or fail if missing
    password = os.getenv('DB_PASSWORD')
    if not password:
        raise ValueError("DB_PASSWORD not set!")
    print("Connecting to database...")  # never print the secret itself
```
Create a file named `.env` containing `DB_PASSWORD=secret`. Add `.env` to your `.gitignore` file immediately.
Goal: Configure your Flask app to use environment variables with python-dotenv.
- Install python-dotenv: `pip install python-dotenv`
- Create a `.env` file with these variables:

  ```
  SECRET_KEY=your-super-secret-key-here
  DATABASE_URL=sqlite:///app.db
  DEBUG=True
  API_KEY=demo-api-key-12345
  ```

- Create a `config.py` that loads these variables using `os.getenv()`
- Add `.env` to `.gitignore` to prevent committing secrets
- Create a `.env.example` file with placeholder values for documentation
- Test that your app fails gracefully when required variables are missing
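A minimal sketch of what that `config.py` might look like — variable names follow the `.env` above, and the fail-fast check for missing settings is one common pattern:

```python
import os

# Settings the app cannot run without
REQUIRED = ("SECRET_KEY", "DATABASE_URL")

def load_config():
    """Read settings from the environment, failing fast if any are missing."""
    missing = [name for name in REQUIRED if not os.getenv(name)]
    if missing:
        raise ValueError(f"Missing required env vars: {', '.join(missing)}")
    return {
        "SECRET_KEY": os.getenv("SECRET_KEY"),
        "DATABASE_URL": os.getenv("DATABASE_URL"),
        # Optional settings get safe defaults
        "DEBUG": os.getenv("DEBUG", "False").lower() == "true",
    }
```

Raising early beats discovering a `None` database URL deep inside a request handler.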
Create a professional configuration system with multiple environments:
- Create a `config.py` with a base `Config` class
- Add `DevelopmentConfig`, `TestingConfig`, and `ProductionConfig` subclasses
- Use the `FLASK_ENV` environment variable to select the config
- Implement validation that raises errors for missing required settings
- Add type hints and docstrings to your configuration classes
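One possible shape for this class hierarchy — the class names follow the exercise, and the `validate` hook is a sketch of how production-only checks might work:

```python
import os

class Config:
    """Base settings shared by every environment."""
    DEBUG = False
    TESTING = False
    SECRET_KEY = os.getenv("SECRET_KEY", "dev-placeholder")

class DevelopmentConfig(Config):
    """Local development: verbose errors, debug mode on."""
    DEBUG = True

class TestingConfig(Config):
    """Test runs: enables Flask's test-mode behavior."""
    TESTING = True

class ProductionConfig(Config):
    """Live deployment: strict validation of secrets."""
    @classmethod
    def validate(cls):
        # Refuse to start with a placeholder secret
        if cls.SECRET_KEY == "dev-placeholder":
            raise RuntimeError("SECRET_KEY must be set in production")

def get_config(env=None):
    """Pick a config class based on FLASK_ENV (defaults to development)."""
    env = env or os.getenv("FLASK_ENV", "development")
    return {
        "development": DevelopmentConfig,
        "testing": TestingConfig,
        "production": ProductionConfig,
    }[env]
```

With Flask you would then call something like `app.config.from_object(get_config())` at startup.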
📦 Dependencies
Ensure your server installs the exact same libraries you used in development.
```shell
# 1. Freeze current environment to file
$ pip freeze > requirements.txt

# Content of requirements.txt:
# flask==2.0.1
# requests==2.26.0
# gunicorn==20.1.0

# 2. Install on server
$ pip install -r requirements.txt
```
Goal: Master Python dependency management for production deployments.
- Create a fresh virtual environment and install: flask, gunicorn, python-dotenv
- Generate `requirements.txt` with pinned versions using `pip freeze`
- Create `requirements-dev.txt` with additional dev dependencies:

  ```
  -r requirements.txt
  pytest==7.4.0
  pytest-cov==4.1.0
  black==23.7.0
  flake8==6.1.0
  ```
- Test installing from both files in a clean environment
- Document the purpose of each dependency in comments
Explore advanced dependency management with modern tools:
- Install and configure `pip-tools` for dependency compilation
- Create a `requirements.in` file with unpinned top-level dependencies
- Use `pip-compile` to generate a locked `requirements.txt`
- Research and compare: pip-tools vs Poetry vs Pipenv
- Write a brief summary of when to use each tool
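A quick sketch of the pip-tools workflow, using the three dependencies from the earlier exercises (the compile step needs pip-tools installed, so it is shown as a comment):

```shell
cd "$(mktemp -d)"
# requirements.in lists only your top-level deps, unpinned
printf 'flask\ngunicorn\npython-dotenv\n' > requirements.in

# pip-compile (from pip-tools) resolves and pins the whole tree:
#   pip install pip-tools
#   pip-compile requirements.in    # writes a fully pinned requirements.txt
```

The advantage over plain `pip freeze`: `requirements.in` records *intent* (what you chose to depend on), while the generated `requirements.txt` records the exact resolved versions, including transitive dependencies.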
🐳 Docker Containers
Package your app, dependencies, and OS config into a single portable unit.
```dockerfile
# Use an official Python runtime as a parent image
FROM python:3.9-slim

# Set working directory
WORKDIR /app

# Install dependencies first (caching layer)
COPY requirements.txt .
RUN pip install --no-cache-dir -r requirements.txt

# Copy the rest of the code
COPY . .

# Command to run the app
CMD ["gunicorn", "-w", "4", "-b", "0.0.0.0:8000", "app:app"]
```
Goal: Containerize a Flask application with Docker and docker-compose.
- Create a simple Flask app with a `/health` endpoint that returns JSON
- Write a `Dockerfile` using the template above
- Add a `.dockerignore` file to exclude: `venv/`, `.env`, `__pycache__/`, `.git/`
- Build the image: `docker build -t my-flask-app .`
- Run the container: `docker run -p 8000:8000 my-flask-app`
- Verify by visiting `http://localhost:8000/health`
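A minimal `app.py` that satisfies the first step might look like this — the JSON shape is just one reasonable choice:

```python
from flask import Flask, jsonify

app = Flask(__name__)

@app.route("/health")
def health():
    # Load balancers and container orchestrators poll this endpoint
    return jsonify(status="ok")

if __name__ == "__main__":
    # Dev server only; inside the container Gunicorn serves app:app instead
    app.run(port=8000)
```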
`docker-compose.yml`:

```yaml
version: '3.8'
services:
  web:
    build: .
    ports:
      - "8000:8000"
    environment:
      - FLASK_ENV=production
      - SECRET_KEY=${SECRET_KEY}
    volumes:
      - ./logs:/app/logs
  redis:
    image: redis:alpine
    ports:
      - "6379:6379"
```
Optimize your Docker image with multi-stage builds:
- Create a multi-stage Dockerfile with a build stage and production stage
- Use `python:3.9` for building and `python:3.9-slim` for production
- Compare image sizes between single-stage and multi-stage builds
- Add health checks to your docker-compose.yml
- Configure proper logging with Docker log drivers
- Set up a volume for persistent data (database files)
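One common multi-stage pattern, assuming the same `app.py` and `requirements.txt` layout as the single-stage example above (the `--prefix` install trick is one of several ways to hand packages from the build stage to the slim stage):

```dockerfile
# Build stage: full image, has compilers for packages that need them
FROM python:3.9 AS builder
WORKDIR /app
COPY requirements.txt .
RUN pip install --no-cache-dir --prefix=/install -r requirements.txt

# Production stage: slim image, copy only the installed packages
FROM python:3.9-slim
WORKDIR /app
COPY --from=builder /install /usr/local
COPY . .
CMD ["gunicorn", "-w", "4", "-b", "0.0.0.0:8000", "app:app"]
```

Build tools and pip caches stay in the discarded builder stage, so the final image ships only the runtime and your code.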
🦄 Production Servers
Flask's built-in server (`app.run()`) is for development only. It is slow and insecure.
The development server is single-threaded and can crash easily under load. Use a WSGI server like Gunicorn.
```shell
# Run Gunicorn with 4 worker processes
# app:app means "look in app.py for the object named app"
$ gunicorn -w 4 -b 0.0.0.0:8000 app:app

# -w 4: use 4 parallel workers (good for a 2-core CPU)
# -b ...: bind to port 8000 on all interfaces
```
Goal: Implement structured logging for production debugging.
- Configure Python's `logging` module with different levels (DEBUG, INFO, WARNING, ERROR)
- Create a logging configuration that:
  - Logs to console in development
  - Logs to files with rotation in production
  - Includes timestamps, log levels, and source location
- Add request logging middleware to your Flask app
- Log important events: startup, shutdown, errors, slow requests
- Never log sensitive data (passwords, API keys, personal info)
Example logging config:
```python
import logging
from logging.handlers import RotatingFileHandler

def setup_logging(app):
    handler = RotatingFileHandler(
        'logs/app.log', maxBytes=10_000_000, backupCount=5
    )
    handler.setFormatter(logging.Formatter(
        '[%(asctime)s] %(levelname)s in %(module)s: %(message)s'
    ))
    app.logger.addHandler(handler)
    app.logger.setLevel(logging.INFO)
```
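A standalone, stdlib-only demo of the same rotating-file setup, showing that messages below the configured level never reach the file (the temporary path is just for illustration):

```python
import logging
import os
import tempfile
from logging.handlers import RotatingFileHandler

# Write logs to a throwaway directory for this demo
log_path = os.path.join(tempfile.mkdtemp(), "app.log")
handler = RotatingFileHandler(log_path, maxBytes=10_000_000, backupCount=5)
handler.setFormatter(logging.Formatter(
    "[%(asctime)s] %(levelname)s in %(module)s: %(message)s"
))

logger = logging.getLogger("demo")
logger.addHandler(handler)
logger.setLevel(logging.INFO)   # INFO and above get through

logger.info("startup complete")           # written to the file
logger.debug("cache miss for key=42")     # dropped: below INFO
handler.flush()

with open(log_path) as f:
    contents = f.read()
```

In production you would point the handler at a real path like `logs/app.log`, as in `setup_logging` above, and let rotation cap disk usage automatically.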
Set up automated testing and deployment with GitHub Actions:
- Create `.github/workflows/ci.yml` for your repository
- Configure the workflow to:
  - Trigger on push to main and pull requests
  - Set up a Python environment
  - Install dependencies from requirements.txt
  - Run linting with flake8 or `black --check`
  - Run tests with pytest and generate a coverage report
- Add a badge to your README showing CI status
- Configure branch protection to require passing CI
Starter workflow:
```yaml
name: Python CI

on:
  push:
    branches: [ main ]
  pull_request:
    branches: [ main ]

jobs:
  test:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v3
      - name: Set up Python
        uses: actions/setup-python@v4
        with:
          python-version: '3.9'
      - name: Install dependencies
        run: |
          pip install -r requirements.txt
          pip install -r requirements-dev.txt
      - name: Lint with flake8
        run: flake8 . --count --show-source --statistics
      - name: Test with pytest
        run: pytest --cov=app tests/
```
📜 DevOps Cheat Sheet
```shell
# ═══ PIP ═══
pip freeze > requirements.txt
pip install -r requirements.txt

# ═══ DOCKER ═══
docker build -t my-app .
docker run -p 8000:8000 my-app

# ═══ ENVIRONMENT ═══
export DB_URL="postgresql://..."   # Linux/Mac
set DB_URL=postgresql://...        # Windows CMD (no quotes: they become part of the value)
$env:DB_URL="postgresql://..."     # PowerShell
```

```python
# ═══ PYTHON ═══
import os
secret = os.getenv('SECRET_KEY')
```
- Never Hardcode Secrets: API keys, passwords, and database URLs must live in environment variables, never committed to Git. Use `os.getenv()` to access them.
- Pin All Dependencies: `requirements.txt` needs exact versions (`Flask==2.3.0`). Unpinned versions break production when libraries update unexpectedly.
- Docker Ensures Consistency: "It works on my machine" becomes "It works everywhere" when your app and dependencies are containerized together.
- Use Production Servers: Flask's dev server is not secure or performant. Use Gunicorn (or uWSGI) for production deployments.
- Test Locally First: Run your Docker container locally (`docker run`) to catch config errors before deploying to production servers.
- Separate Config from Code: Development, staging, and production environments need different configs. Environment variables make this possible without code changes.
- Automate Deployment: CI/CD pipelines (GitHub Actions, GitLab CI) automate testing and deployment, reducing human error and deployment friction.
- Monitor and Log: Production apps need logging (to files or services like Sentry). If it breaks at 3 AM, you need logs to debug remotely.