📖 Welcome to Week 3: Level Up Your Python
Think of this lesson as unlocking new abilities in your Python skill tree. You've learned variables, loops, and functions—now it's time to learn the professional patterns that make your code cleaner, more powerful, and more Pythonic.
- OOP (Object-Oriented Programming) — Create your own custom data types with classes
- Decorators — Add superpowers to functions without changing their code
- List Comprehensions — Transform data in one elegant line
- Closures — Create functions that "remember" values
- Code Quality — Recognize bad patterns and learn Pythonic solutions
🗺️ Today's Topics
Click any card below to jump directly to that section:
Each topic shows the same problem solved 4 different ways:
🔴 Bad Practice — Code that works but has problems (bugs, inefficiency, poor style)
🟠 Novice Solution — Basic working code that beginners might write
🔵 Intermediate Solution — Better code with some improvements
🟢 Best Practice — The Pythonic way professionals write it
Click the tabs in each example to see the progression. Understanding why each level improves is more important than memorizing syntax!
These aren't just academic exercises—they're patterns used everywhere:
• Flask's @app.route() — That's a decorator!
• Pandas DataFrames — That's OOP (objects with methods)
• Data transformations — List comprehensions are the go-to tool
• React-style callbacks — Closures power event handlers
1. Don't memorize—understand the "why": Focus on WHY each level is better than the previous. The syntax will stick when you understand the reasoning.
2. Read code out loud: If you can't explain what a line does in plain English, you don't understand it yet. Ask questions!
3. Run the examples: Click "Run" on code blocks and modify them. Break things on purpose to see what error messages say.
4. Try the playgrounds: The challenges are designed to solidify your understanding. Struggle is part of learning—it's okay to need hints!
5. Discuss with peers: The discussion questions at each section's end are meant to spark conversation. Teaching others is the best way to learn.
🏗️ Object-Oriented Programming (OOP)
OOP lets you create your own custom data types that bundle data (attributes) and behavior (methods) together. Think of a class as a blueprint, and objects as the actual things built from that blueprint.
class — The blueprint/template that defines attributes and methods
object / instance — A specific thing created from a class
self — Reference to the current object (always first parameter in methods)
__init__ — The constructor method, called when creating new objects
method — A function that belongs to a class
attribute — A variable that belongs to an object (e.g., self.name)
Example 1: Creating a Player Class
Let's model a game player with a name, level, and the ability to calculate attack power. Watch how the code improves at each level:
# ❌ BAD PRACTICE: Using a dictionary instead of a class
# Problem 1: No structure - anyone can add/remove keys
# Problem 2: No methods - behavior is separate from data
# Problem 3: No validation - can set invalid values
# Problem 4: No IDE autocomplete or type hints
# Creating a "player" as a dictionary
player = {
    'name': 'Ada',
    'level': 5
}
# Calculating power as a separate function (not connected to player)
def get_power(player_dict):
    return player_dict['level'] * 3
# Using the dictionary
print(player)
print(f"{player['name']} attacks with power {get_power(player)}")
# ⚠️ Nothing stops us from doing bad things:
# player['level'] = -999 # Invalid level!
# player['naem'] = 'typo' # Typo creates new key!
No structure: Dictionaries accept any keys, so typos create bugs silently.
No encapsulation: The get_power() function is separate from the data it operates on.
No validation: You can set level to -999 or "banana" and nothing stops you.
Hard to maintain: As your code grows, tracking what keys exist becomes a nightmare.
# 🔰 NOVICE: Basic class structure
# ✓ Data and behavior are now together
# ✓ Clear structure with defined attributes
# ✗ Print output is ugly (memory address)
# ✗ No default values
class Player:
    # __init__ is the CONSTRUCTOR - runs when you create an object
    # 'self' refers to the object being created
    def __init__(self, name, level):
        # Store the parameters as ATTRIBUTES on the object
        self.name = name    # self.name belongs to THIS object
        self.level = level  # self.level belongs to THIS object

    # A METHOD - a function that belongs to the class
    def power(self):
        # 'self' gives us access to this object's attributes
        return self.level * 3
# Create an INSTANCE (object) of the Player class
p = Player('Ada', 5)
# Access attributes with dot notation
print(p) # Ugly output: <__main__.Player object at 0x...>
print(f"{p.name} attacks with power {p.power()}")
Encapsulation: Data (name, level) and behavior (power()) are bundled together.
Clear structure: You know exactly what attributes a Player has.
IDE support: Your editor can now autocomplete p.name, p.level, p.power().
# 📈 INTERMEDIATE: Better printing and defaults
# ✓ __repr__ gives readable output when printing
# ✓ Default parameter for level
# ✗ Still no type hints
# ✗ No docstrings explaining the class
class Player:
    def __init__(self, name, level=1):  # level defaults to 1
        self.name = name
        self.level = level

    def power(self):
        return self.level * 3

    # __repr__ defines how the object looks when printed
    # Convention: return a string that could recreate the object
    def __repr__(self):
        return f"Player(name='{self.name}', level={self.level})"
# Now printing shows useful information!
p = Player('Ada', level=5)
print(p) # Player(name='Ada', level=5)
print(f"{p.name} attacks with power {p.power()}")
# Default value in action
newbie = Player('Bob') # level defaults to 1
print(newbie)
__repr__: Now print(p) shows Player(name='Ada', level=5) instead of a memory address.
Default values: level=1 means new players start at level 1 automatically.
Debugging: When something goes wrong, you can see the object's state clearly.
# ⭐ BEST PRACTICE: Production-ready class
# ✓ Type hints for all parameters and return values
# ✓ Docstrings explaining purpose and usage
# ✓ Input validation to prevent invalid data
# ✓ Clean, readable, maintainable code
class Player:
    """
    Represents a game player with a name and level.

    Attributes:
        name (str): The player's display name
        level (int): The player's current level (1-100)

    Example:
        >>> player = Player('Ada', level=5)
        >>> player.power()
        15
    """

    def __init__(self, name: str, level: int = 1) -> None:
        """Initialize a new Player instance."""
        # Validate inputs to catch bugs early
        if not name:
            raise ValueError("Name cannot be empty")
        if not 1 <= level <= 100:
            raise ValueError("Level must be between 1 and 100")
        self.name = name
        self.level = level

    def power(self) -> int:
        """Calculate the player's attack power."""
        return self.level * 3

    def __repr__(self) -> str:
        return f"Player(name='{self.name}', level={self.level})"
# Usage
p = Player('Ada', level=5)
print(p)
print(f"{p.name} attacks with power {p.power()}")
newbie = Player('Bob')
print(newbie)
# This would raise an error (uncomment to test):
# bad_player = Player('', level=999) # ValueError!
Type hints (name: str): Your IDE can catch type errors before you run the code.
Docstrings: Other developers (and future you) know how to use the class.
Validation: Invalid data is rejected immediately with clear error messages.
Professional standard: This is how classes look in production codebases.
Example 2: Inheritance (Creating Specialized Classes)
Inheritance lets you create new classes based on existing ones. A Mage is a type of Character with extra abilities.
# ❌ BAD: Copy-pasting code between similar classes
# Problem: If you fix a bug in one, you must fix it everywhere!
class Warrior:
    def __init__(self, name, level):
        self.name = name    # Copied code
        self.level = level  # Copied code
        self.strength = 10

    def attack(self):
        return self.strength + self.level

class Mage:
    def __init__(self, name, level):
        self.name = name    # Same copied code!
        self.level = level  # Same copied code!
        self.mana = 100

    def cast(self, spell):
        return f"{self.name} casts {spell} for {self.level * 6} damage"
# Works, but code is duplicated
m = Mage('Turing', 4)
print(m.cast('Fireball'))
DRY violation: "Don't Repeat Yourself" - the same code is in multiple places.
Maintenance nightmare: Fix a bug in Warrior.__init__? You have to remember to fix Mage too.
No relationship: Python doesn't know that Warriors and Mages are both "characters".
# 🔰 NOVICE: Basic inheritance
# Parent class (also called "base" or "super" class)
class Character:
    def __init__(self, name, level=1):
        self.name = name
        self.level = level

# Child class - inherits from Character
# Syntax: class ChildClass(ParentClass)
class Mage(Character):  # Mage "is a" Character
    def __init__(self, name, level=1, mana=100):
        # super() calls the parent class's method
        super().__init__(name, level)  # Let Character set name & level
        self.mana = mana  # Mage-specific attribute

    def cast(self, spell):
        return f"{self.name} casts {spell} for {self.level * 6} damage"
m = Mage('Turing', 4)
print(m.cast('Fireball'))
super().__init__(): Calls the parent's constructor, avoiding code duplication.
Clear hierarchy: A Mage IS-A Character - Python knows the relationship.
Single source: Change how characters work? Just edit the Character class.
# 📈 INTERMEDIATE: Composition - "has a" vs "is a"
# Instead of inheriting everything, objects can CONTAIN other objects
class Inventory:
    """Manages a collection of items."""
    def __init__(self):
        self.items = []

    def add(self, item):
        self.items.append(item)

    def remove(self, item):
        self.items.remove(item)

class Character:
    def __init__(self, name, level=1):
        self.name = name
        self.level = level
        # COMPOSITION: Character HAS-AN Inventory
        self.inventory = Inventory()  # Create an Inventory object

    def pick_up(self, item):
        self.inventory.add(item)
        print(f"{self.name} picked up {item}")
# Using composition
hero = Character('Ada')
hero.pick_up('Health Potion')
print(f"{hero.name}'s inventory: {hero.inventory.items}")
Inheritance (is-a): "A Mage IS A Character" - use when there's a true hierarchy.
Composition (has-a): "A Character HAS AN Inventory" - use when objects contain other objects.
Rule of thumb: Prefer composition - it's more flexible and easier to change later.
# ⭐ BEST PRACTICE: Use @dataclass for data-focused classes
# dataclass auto-generates __init__, __repr__, __eq__, etc.
from dataclasses import dataclass
@dataclass
class Item:
    """An item that can be bought or sold."""
    name: str
    price: float
    quantity: int = 1  # Default value

    def total_value(self) -> float:
        """Calculate total value of this item stack."""
        return self.price * self.quantity
# dataclass auto-generates:
# __init__(self, name, price, quantity=1)
# __repr__(self) -> "Item(name='...', price=..., quantity=...)"
# __eq__(self, other) -> compares all fields
potion = Item('Potion', 9.99, quantity=3)
print(potion) # Automatic nice output!
print(f"Total value: ${potion.total_value():.2f}")
Less boilerplate: No need to write __init__, __repr__, etc. manually.
Type hints required: Forces you to document your data types.
When to use: Classes that are mainly about storing data (like database records, API responses, game items).
When NOT to use: Classes with complex initialization logic or heavy behavior.
Forgetting self: Every method needs self as the first parameter, and you must use self. to access attributes.
Calling methods wrong: Use obj.method() not method(obj) from outside the class.
Mutable default arguments: Never use def __init__(self, items=[]) — all instances share the same list! Use None and create a new list inside.
Overusing inheritance: Ask "IS-A" vs "HAS-A". Often composition (objects containing other objects) is cleaner than inheritance.
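The mutable default argument trap is worth seeing in action. A minimal sketch (the Backpack class is just an illustration, not from the lesson's examples):

```python
class Backpack:
    def __init__(self, items=[]):  # BUG: this ONE list is shared by every instance
        self.items = items

a = Backpack()
b = Backpack()
a.items.append('sword')
print(b.items)  # ['sword'], even though b was never touched!

class SafeBackpack:
    def __init__(self, items=None):
        # Create a fresh list per instance instead of sharing one default
        self.items = items if items is not None else []

c = SafeBackpack()
d = SafeBackpack()
c.items.append('sword')
print(d.items)  # [], each instance is independent
```

The default list is created once, when the def line runs, so every instance that relies on the default ends up holding the same list object.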
Create a Weapon class with name, damage, and speed attributes, plus a dps() method that returns damage per second (damage × speed).
Hint: Use self.attribute_name to store values in __init__ and access them in dps().
Create a BankAccount class that manages deposits and withdrawals:
- Initialize with owner (string) and an optional balance (default 0)
- Create a deposit(amount) method that adds to the balance and returns the new balance
- Create a withdraw(amount) method that subtracts from the balance (if sufficient funds) and returns the new balance, or prints "Insufficient funds" and returns the current balance
- Create a get_balance() method that returns the current balance
- Add a __repr__ method that shows the account info nicely
Test your class:
account = BankAccount("Alice", 100)
print(account.deposit(50)) # Should print 150
print(account.withdraw(30)) # Should print 120
print(account.withdraw(200)) # Should print "Insufficient funds" and 120
print(account) # Should show nice repr
Build a shape hierarchy using inheritance:
- Create a base Shape class with a name attribute and an area() method that raises NotImplementedError
- Create a Rectangle subclass with width and height; override area() to return width × height
- Create a Circle subclass with radius; override area() to return π × radius² (use import math)
- Add a __repr__ to each class showing shape info and area
Bonus: Create a function total_area(shapes) that takes a list of shapes and returns their combined area.
shapes = [Rectangle(4, 5), Circle(3), Rectangle(2, 2)]
for s in shapes:
    print(f"{s.name}: {s.area():.2f}")
print(f"Total: {total_area(shapes):.2f}")
1. When would you use a class vs just using dictionaries or functions? Give an example from a project you might build.
2. What's the difference between inheritance (IS-A) and composition (HAS-A)? Which is better for a game with characters that can equip weapons?
3. Why is __repr__ useful for debugging? Have you ever been confused by seeing <object at 0x...> in your output?
4. When should you use @dataclass instead of writing a regular class? What are the trade-offs?
Open notebook-sessions/week3/session1_more_python_skills.ipynb and build your own Character class with __repr__, methods, and a child class (e.g., Mage or Archer).
✨ Decorators
Decorators are functions that wrap other functions to add extra behavior. They let you modify what happens before or after a function runs—without changing the function's code.
@decorator — The @ syntax that applies a decorator to a function
wrapper — The inner function that wraps the original function
*args, **kwargs — Catch-all for any arguments passed to the function
@wraps(fn) — Preserves the original function's name and docstring
Imagine you have a gift (your function). A decorator is like wrapping paper—it goes around the gift, can add a bow or tag (extra behavior), but the gift inside stays the same.
@logging adds a note saying "Gift was opened at 3pm"
@timer adds a stopwatch to see how long opening took
@require_login adds a lock that checks if you're allowed to open it
Example 1: Creating a Logging Decorator
Let's create a decorator that prints a message whenever a function is called:
# ❌ BAD: Copy-pasting logging code into every function
# Problem 1: Duplicate code everywhere
# Problem 2: Hard to change logging format
# Problem 3: Easy to forget in some functions
def greet(name):
    print("[LOG] Calling greet")  # Copied logging
    print(f"Hello, {name}")

def farewell(name):
    print("[LOG] Calling farewell")  # Same copied logging!
    print(f"Goodbye, {name}")
# If you want to change the log format, you have to
# change it in EVERY function!
greet('Ada')
farewell('Ada')
DRY violation: The same logging code is copied into every function.
Maintenance problem: Want to change "[LOG]" to "[DEBUG]"? Edit every single function.
Mixing concerns: The function now does two things: logging AND its actual job.
# 🔰 NOVICE: First attempt at a decorator
# ✓ Logging code is in one place
# ✗ Doesn't show the actual function name
# ✗ Doesn't handle function arguments properly
# A decorator is a function that takes a function as input
def log(func):
    # It returns a NEW function (the "wrapper")
    def wrapper():
        print("[LOG] Calling a function")
        func()  # Call the original function
    return wrapper  # Return the wrapper, NOT call it

# @log is equivalent to: greet = log(greet)
@log
def greet():
    print("Hello, Ada")
# When we call greet(), we're actually calling wrapper()
greet()
# ⚠️ Problem: This breaks if greet() takes arguments!
# greet("Bob") # TypeError: wrapper() takes 0 positional arguments
Separation: Logging logic is now separate from the function.
Reusable: Apply @log to any function.
But: This simple version can't handle functions with arguments!
# 📈 INTERMEDIATE: Handle any arguments
# ✓ Works with any function signature
# ✓ Shows the function name
# ✗ Loses the original function's name and docstring
def log(func):
    # *args catches positional args: (1, 2, 3)
    # **kwargs catches keyword args: (name="Ada", age=25)
    def wrapper(*args, **kwargs):
        # func.__name__ gets the original function's name
        print(f"[LOG] Calling {func.__name__}")
        # Pass all arguments through to the original function
        result = func(*args, **kwargs)
        return result  # Don't forget to return the result!
    return wrapper

@log
def greet(name):
    print(f"Hello, {name}!")

@log
def add(a, b):
    return a + b
greet('Ada')
print(f"Result: {add(3, 5)}")
# ⚠️ Problem: greet.__name__ is now "wrapper", not "greet"!
*args, **kwargs: The wrapper can accept ANY arguments and pass them through.
func.__name__: We can access the original function's name.
Return value: We capture and return the result properly.
# ⭐ BEST PRACTICE: Preserve function metadata
# ✓ @wraps preserves __name__, __doc__, etc.
# ✓ Type hints for the decorator
# ✓ Logs arguments for debugging
from functools import wraps
from typing import Callable, Any
def log(func: Callable) -> Callable:
    """Decorator that logs function calls with arguments."""
    @wraps(func)  # This copies __name__, __doc__, etc. from func to wrapper
    def wrapper(*args: Any, **kwargs: Any) -> Any:
        # Log the call with all arguments for debugging
        print(f"[LOG] Calling {func.__name__} with args={args} kwargs={kwargs}")
        return func(*args, **kwargs)
    return wrapper

@log
def greet(name: str) -> None:
    """Greet someone by name."""
    print(f"Hello, {name}!")
greet('Ada')
# Now the function keeps its original identity!
print(f"Function name: {greet.__name__}") # "greet" not "wrapper"
print(f"Docstring: {greet.__doc__}") # Preserved!
@wraps(func): Copies the original function's __name__, __doc__, and other metadata to the wrapper.
Debugging: Tools like debuggers and documentation generators see the real function name.
Type hints: Makes the decorator's purpose clear to other developers.
Example 2: Timing How Long Functions Take
A practical decorator that measures execution time—useful for performance profiling:
# ❌ BAD: Mixing timing code into the function
import time
def slow_function():
    start = time.time()  # Timing code
    # --- Actual function logic ---
    time.sleep(0.5)  # Simulate slow operation
    result = "Done!"
    # --- End function logic ---
    end = time.time()  # More timing code
    print(f"slow_function took {end - start:.2f} seconds")
    return result
print(f"Result: {slow_function()}")
# Problems:
# - Function now does TWO things: its job AND timing
# - Can't easily turn timing on/off
# - Must add timing code to EVERY function you want to time
# 🔰 NOVICE: Basic timing decorator
import time
def timer(func):
    def wrapper(*args, **kwargs):
        start = time.time()
        result = func(*args, **kwargs)
        end = time.time()
        print(f"[TIMER] {func.__name__} took {end - start:.2f}s")
        return result
    return wrapper

@timer
def slow_function():
    time.sleep(0.5)
    return "Done!"
print(f"Result: {slow_function()}")
# ⭐ BEST PRACTICE: Production-ready timing decorator
import time
from functools import wraps
from typing import Callable, Any
def timer(func: Callable) -> Callable:
    """
    Decorator that measures and prints function execution time.

    Uses time.perf_counter() for high-precision timing.
    """
    @wraps(func)
    def wrapper(*args: Any, **kwargs: Any) -> Any:
        # perf_counter() is more accurate than time.time()
        start = time.perf_counter()
        try:
            result = func(*args, **kwargs)
            return result
        finally:
            # 'finally' ensures timing runs even if the function raises an error
            elapsed = time.perf_counter() - start
            print(f"[⏱️ TIMER] {func.__name__} completed in {elapsed*1000:.2f}ms")
    return wrapper

@timer
def slow_function() -> str:
    """Simulate a slow operation."""
    time.sleep(0.5)
    return "Done!"
print(f"Result: {slow_function()}")
time.perf_counter(): More accurate than time.time() for measuring short durations.
try/finally: The timing message prints even if the function crashes.
Milliseconds: elapsed*1000 gives milliseconds for readability.
Flask: @app.route('/home') — registers a function as a URL handler
Django: @login_required — restricts access to logged-in users
Python: @property — makes a method act like an attribute
Python: @staticmethod — creates a method that doesn't need self
Testing: @pytest.fixture — sets up test data
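Of those, @property is the easiest to try yourself right now. A minimal sketch (the Temperature class is just an illustration):

```python
class Temperature:
    """Stores a temperature in Celsius; Fahrenheit is a computed attribute."""
    def __init__(self, celsius):
        self._celsius = celsius

    @property
    def fahrenheit(self):
        # Defined like a method, but accessed WITHOUT parentheses
        return self._celsius * 9 / 5 + 32

t = Temperature(100)
print(t.fahrenheit)  # 212.0, no () needed
```

Callers see a plain attribute, while the class keeps full control over how the value is computed.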
Forgetting to return the result: If the decorated function returns a value, your wrapper MUST return func(*args, **kwargs).
Not using @wraps: Without it, your function loses its name and docstring, breaking help() and debugging tools.
Calling the decorator: Use @timer not @timer() for simple decorators. The () is only for decorators with parameters.
Confusing decorator order: @a @b def func(): means a(b(func)) — bottom decorator is applied first!
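The ordering rule is easy to verify with two toy decorators (a and b here are illustrative names only):

```python
def a(func):
    def wrapper():
        return f"a({func()})"
    return wrapper

def b(func):
    def wrapper():
        return f"b({func()})"
    return wrapper

# @a above @b means hello = a(b(hello)): b wraps first, then a wraps b's wrapper
@a
@b
def hello():
    return "hello"

print(hello())  # a(b(hello))
```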
Create a decorator called @count_calls that counts how many times a function has been called and prints the count.
Hint: Use nonlocal count inside the wrapper to modify the outer count variable. This is actually a closure (next topic!).
Create a @timer decorator that measures function execution time:
- Import time at the top of your code
- Record the start time before calling the wrapped function using time.perf_counter()
- Call the original function and store its result
- Record the end time after the function completes
- Print the function name and elapsed time in milliseconds
- Return the original function's result
- Use @functools.wraps(func) to preserve function metadata
Test your decorator:
import time
import functools
@timer
def slow_function():
    time.sleep(0.5)  # Simulate slow work
    return "Done!"
result = slow_function()
# Should print something like: "slow_function took 500.12 ms"
print(result) # Should print: "Done!"
1. What are some real-world uses for decorators you can imagine? (Hint: Think about logging, security, validation, caching...)
2. Why is separation of concerns important? How do decorators help keep your code clean?
3. When would a decorator make code MORE confusing instead of clearer? When should you just use a regular function call?
4. Can you think of a decorator you'd want for your own projects? What behavior would it add?
Open notebook-sessions/week3/session1_more_python_skills.ipynb and implement @log and @timer decorators on your own functions. Bonus: write a @require_role('admin') decorator.
⚡ List Comprehensions
Comprehensions are Python's elegant way to create new lists, dicts, or sets by transforming existing data. They replace multi-line loops with a single, readable expression.
Basic syntax: [expression for item in iterable]
With filter: [expression for item in iterable if condition]
Dict: {key: value for item in iterable}
Set: {expression for item in iterable}
Generator: (expression for item in iterable) — lazy, memory efficient
Example 1: Squaring Numbers
Let's transform a list of numbers into their squares:
# ❌ BAD: Using index-based loop (C-style)
# Problem 1: Verbose and error-prone
# Problem 2: Easy to get off-by-one errors
# Problem 3: Not Pythonic
nums = [0, 1, 2, 3, 4]
squares = []
# Using index like in C/Java - NOT Pythonic!
i = 0
while i < len(nums):
    squares.append(nums[i] * nums[i])
    i += 1  # Easy to forget this line!
print(squares)
Verbose: 6 lines for a simple transformation.
Error-prone: Forgetting i += 1 creates an infinite loop.
Unpythonic: Python has better iteration patterns.
# 🔰 NOVICE: Standard for loop
# ✓ Correct Python iteration
# ✓ Clear and readable
# ✗ Takes 4 lines for a simple task
nums = [0, 1, 2, 3, 4]
squares = [] # Start with empty list
for n in nums:  # Iterate directly over elements
    squares.append(n * n)  # Add squared value
print(squares)
Direct iteration: for n in nums is cleaner than index-based loops.
Readable: Anyone can follow the logic step by step.
# 📈 INTERMEDIATE: List comprehension
# ✓ Single line, declarative style
# ✓ The Pythonic way to transform lists
nums = [0, 1, 2, 3, 4]
# Read as: "n squared FOR each n IN nums"
# [expression for variable in iterable]
squares = [n * n for n in nums]
print(squares)
# Anatomy of a list comprehension:
# [ n * n for n in nums ]
# └─────┘ └─┘ └────┘
# EXPRESSION VARIABLE ITERABLE
One line: Entire transformation in a single expression.
Declarative: Says WHAT you want, not HOW to get it step by step.
Faster: Comprehensions are optimized internally.
# ⭐ BEST PRACTICE: Comprehension with condition
# ✓ Transform AND filter in one expression
# ✓ Highly readable when you know the pattern
nums = range(10) # 0, 1, 2, ..., 9
# Square only the ODD numbers
# Read as: "n squared FOR each n IN nums IF n is odd"
odd_squares = [n * n for n in nums if n % 2 == 1]
print(odd_squares)
# Multiple examples:
names = ['Ada', 'Bob', 'Grace', 'Al']
# Names longer than 2 characters, uppercased
long_names = [name.upper() for name in names if len(name) > 2]
# Result: ['ADA', 'BOB', 'GRACE']
# Anatomy with condition:
# [expression for var in iterable if condition]
# └────────────┘
# FILTER (optional)
Transform + Filter: One expression does both!
When NOT to use: If logic is complex, use a regular loop for clarity.
Rule of thumb: If you can't read it out loud naturally, it's too complex.
Example 2: Dictionary & Set Comprehensions
The same pattern works for dicts and sets:
# ⭐ DICT COMPREHENSION: {key: value for item in iterable}
names = ['ada', 'bob', 'grace']
# Create a dict mapping each name to its length
name_lengths = {name: len(name) for name in names}
print(name_lengths) # {'ada': 3, 'bob': 3, 'grace': 5}
# ⭐ SET COMPREHENSION: {expression for item in iterable}
words = ['ada', 'bob', 'grace', 'turing']
# Get unique word lengths (sets automatically remove duplicates)
unique_lengths = {len(w) for w in words}
print(unique_lengths) # {3, 5, 6}
# Comparison: [] = list, {} = dict/set, () = generator
Too complex: If your comprehension spans multiple lines or is hard to read aloud, use a regular loop instead. Readability matters!
Nested comprehensions: [x for row in matrix for x in row] can be confusing. Sometimes nested loops are clearer.
Side effects: Don't use comprehensions for functions with side effects (printing, writing files). Use regular loops for that.
Memory with large data: List comprehensions create the entire list in memory. Use generator expressions (x for x in ...) for large datasets.
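The memory point is easy to demonstrate with sys.getsizeof (the variable names below are illustrative):

```python
import sys

# List comprehension: builds every value up front, all in memory at once
squares_list = [n * n for n in range(1_000_000)]

# Generator expression: same logic, but values are produced one at a time
squares_gen = (n * n for n in range(1_000_000))

print(sys.getsizeof(squares_list))  # several megabytes
print(sys.getsizeof(squares_gen))   # tiny, regardless of the range size

# Generators work anywhere an iterable is expected, e.g. fed straight into sum()
print(sum(n * n for n in range(10)))  # 285
```

Note that a generator can only be consumed once; if you need to iterate the data repeatedly, a list is the right choice.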
Create a dict mapping each name to its UPPERCASE form, but only for names longer than 3 characters.
Hint: {name: name.upper() for name in names if len(name) > 3}
Use list comprehensions to filter and transform the following data:
students = [
    {"name": "Alice", "grade": 85, "subject": "Math"},
    {"name": "Bob", "grade": 72, "subject": "Science"},
    {"name": "Charlie", "grade": 90, "subject": "Math"},
    {"name": "Diana", "grade": 68, "subject": "History"},
    {"name": "Eve", "grade": 95, "subject": "Math"},
]
- Create a list of names of students with grades >= 80: ["Alice", "Charlie", "Eve"]
- Create a list of Math students' names: ["Alice", "Charlie", "Eve"]
- Create a dict mapping names to grades for passing students (grade >= 70): {"Alice": 85, "Bob": 72, ...}
- Create a list of (name, grade) tuples sorted by grade descending
- Bonus: Calculate the average grade using a generator expression
Starter code:
honor_roll = [s["name"] for s in students if ___]
math_students = [___ for s in students if ___]
passing_grades = {___: ___ for s in students if ___}
average = sum(___ for s in students) / len(students)
1. When would you choose a list comprehension over a regular for loop? When would you choose the loop?
2. What's the difference between [x for x in range(1000000)] and (x for x in range(1000000))? When does it matter?
3. Have you seen comprehensions in Pandas or other data science libraries? How do they relate to operations like df['col'].apply()?
4. Can you think of a data transformation problem where a comprehension would make your code much cleaner?
Open notebook-sessions/week3/session2_more_python_skills_group.ipynb and write list/dict/set comprehensions to transform a dataset. Bonus: compare comprehension vs loop performance on 100k rows.
🔒 Closures
A closure is a function that "remembers" variables from its enclosing scope, even after that scope has finished executing. This lets you create functions with persistent state.
enclosing scope — The outer function that contains the inner function
free variable — A variable used in the inner function but defined in the enclosing scope
nonlocal — Keyword to modify a variable from the enclosing scope
function factory — A function that returns customized functions
Imagine a function is a person. When they leave a room (scope), they can take things with them in a backpack (the closure). Even after the room is gone, they still have access to what's in their backpack.
The inner function "closes over" the variables it needs, keeping them alive.
Example 1: Creating a Multiplier Factory
A function that creates customized multiplier functions:
# ❌ BAD: Using global variables for configuration
# Problem: The function's behavior depends on external state
multiplier = 2 # Global variable - anyone can change it!
def multiply(x):
    return x * multiplier  # Uses global - dangerous!
print(f"double(5) = {multiply(5)}") # 10
# Someone else changes the global...
multiplier = 3
print(f"multiplier = {multiplier}")
# Our "double" function is now broken!
print(f"double(5) = {multiply(5)}") # 15 - should be 10!
# Problems:
# - Global state is hard to track and debug
# - Can't have multiple multipliers at once
# - Any code can accidentally change the behavior
Unpredictable: Any code anywhere can change the global and break your function.
Not reusable: Can't have both double() and triple() at the same time.
Hard to test: Tests can interfere with each other via shared global state.
# 🔰 NOVICE: Pass the multiplier as a parameter
# ✓ No global state
# ✗ Must pass the multiplier EVERY time you call
def multiply(x, factor):
    return x * factor
# Works, but clunky
print(f"double(5) = {multiply(5, 2)}")
print(f"triple(5) = {multiply(5, 3)}")
# What if we want a "dedicated" double function?
# We'd have to keep passing 2 everywhere...
# multiply(10, 2), multiply(20, 2), multiply(30, 2)...
No global: The factor is explicit, not hidden state.
But: It's tedious to keep passing the same value repeatedly.
# ⭐ BEST PRACTICE: Use a closure to create specialized functions
# ✓ Each function "remembers" its own factor
# ✓ No global state, no repeated parameters
def make_multiplier(factor):
    """
    Factory function that creates customized multiplier functions.

    Args:
        factor: The number to multiply by

    Returns:
        A function that multiplies its input by factor
    """
    # This inner function "closes over" the 'factor' variable
    def multiply(x):
        return x * factor  # 'factor' is remembered!
    return multiply  # Return the function itself
# Create specialized functions
double = make_multiplier(2) # double remembers factor=2
triple = make_multiplier(3) # triple remembers factor=3
# Each function has its own "memory"
print(f"double(5) = {double(5)}") # 10
print(f"triple(5) = {triple(5)}") # 15
print(f"double(7) = {double(7)}") # 14
print(f"triple(7) = {triple(7)}") # 21
Encapsulation: The factor is private to each function—no one can accidentally change it.
Reusable: Create as many specialized functions as you need.
Clean API: double(5) is cleaner than multiply(5, 2).
Example 2: Creating a Stateful Counter
Closures can maintain mutable state between calls:
# ⭐ CLOSURE WITH MUTABLE STATE
# Use 'nonlocal' to modify variables from enclosing scope
def make_counter(start=0):
    """Create a counter that tracks its own count."""
    count = start  # This variable is "enclosed" by the inner function
    def increment():
        nonlocal count  # Required to MODIFY (not just read) the outer variable
        count += 1
        return count
    return increment
# Create two independent counters
counter_a = make_counter()
counter_b = make_counter()
# Each maintains its own state!
print(f"counter_a: {counter_a()}") # 1
print(f"counter_a: {counter_a()}") # 2
print(f"counter_b: {counter_b()}") # 1 (independent!)
print(f"counter_a: {counter_a()}") # 3
print(f"counter_b: {counter_b()}") # 2
nonlocal — Refers to variable in the enclosing function (one level up)
global — Refers to variable at module level (top of file) — avoid when possible!
Without either: Python treats the variable as local, causing an error if you try to assign to it.
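The rule above is easy to see in action. Assigning to an outer variable without nonlocal makes Python treat the name as local inside the inner function, so an augmented assignment like count += 1 tries to read a local that doesn't exist yet. A minimal sketch (make_broken_counter is a hypothetical name for illustration):

```python
def make_broken_counter():
    count = 0
    def increment():
        count += 1   # No 'nonlocal': Python treats 'count' as a NEW local,
        return count  # so 'count += 1' reads it before it's assigned
    return increment

broken = make_broken_counter()
try:
    broken()
except UnboundLocalError as e:
    print(f"Oops: {e}")
```

The exact error message varies by Python version, but the cause is always the same: the assignment made count local, and the read happened before any local value existed.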
Forgetting nonlocal: Assigning to an outer variable without nonlocal makes Python treat the name as local — a plain assignment silently creates a new local variable, and an augmented assignment like count += 1 raises UnboundLocalError.
Loop variable bug: In [lambda: x for x in range(3)], all lambdas share the same x reference (value at loop end). Use default args to capture: [lambda x=x: x for x in range(3)].
Overusing closures: Sometimes a class with methods is clearer than nested functions with nonlocal.
Memory leaks: Closures keep outer variables alive. Be careful with large objects in long-lived closures.
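The loop-variable pitfall is worth running yourself. All three lambdas in the first list close over the same x, so they all see its final value; the default-argument trick evaluates x at definition time, capturing a separate copy per lambda:

```python
# Late binding: every lambda closes over the SAME 'x' variable
late = [lambda: x for x in range(3)]
print([f() for f in late])  # [2, 2, 2] - all see x's final value

# Fix: default arguments are evaluated when each lambda is DEFINED,
# so each one captures its own copy of x
captured = [lambda x=x: x for x in range(3)]
print([f() for f in captured])  # [0, 1, 2]
```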
Create a closure that tracks a running average. Each call adds a number and returns the current average.
Hint: declare nonlocal total, count inside add_number, update both, then return total / count.
Create a closure that generates customizable counter functions:
- Create make_counter(start=0, step=1) that returns a counter function
- Each call to the returned function should increment by step and return the current count
- The counter should start at start
- Multiple counters should be independent of each other
Test your counter factory:
counter_a = make_counter() # starts at 0, step 1
counter_b = make_counter(start=10, step=5) # starts at 10, step 5
print(counter_a()) # 1
print(counter_a()) # 2
print(counter_b()) # 15
print(counter_b()) # 20
print(counter_a()) # 3 (independent!)
Bonus challenges:
- Add a reset() function that resets to the start value
- Add a get_count() function that returns the current count without incrementing
- Return a dict of functions: {"next": ..., "reset": ..., "get": ...}
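One possible shape for the bonus version, offered as a sketch rather than the only answer: all three inner functions share the same enclosed count, and returning them in a dict gives the caller a small API.

```python
def make_counter(start=0, step=1):
    count = start
    def nxt():
        nonlocal count
        count += step
        return count
    def reset():
        nonlocal count
        count = start
    def get():
        return count  # reading an outer variable needs no 'nonlocal'
    return {"next": nxt, "reset": reset, "get": get}

c = make_counter(start=10, step=5)
print(c["next"]())  # 15
print(c["next"]())  # 20
print(c["get"]())   # 20 (no increment)
c["reset"]()
print(c["get"]())   # 10
```

If you find yourself returning a dict of three or more related functions, that's often the signal to reach for a class instead — which leads straight into discussion question 1.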
1. What's the difference between a closure and a class? When would you choose one over the other?
2. How do closures relate to decorators? (Hint: Decorators ARE closures!)
3. Can you think of a situation where you'd want a function to "remember" previous values or configuration?
4. Why is nonlocal necessary? What happens if you forget it?
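Question 2's hint deserves a look in code: a decorator is just a function factory whose inner wrapper closes over func, exactly the way make_multiplier closes over factor. A minimal sketch (shout is a made-up decorator for illustration):

```python
from functools import wraps

def shout(func):              # outer function receives the function to wrap...
    @wraps(func)
    def wrapper(*args, **kwargs):
        return func(*args, **kwargs).upper()  # ...and the wrapper "remembers" it
    return wrapper            # returning the inner function = a closure

@shout
def greet(name):
    return f"hello, {name}"

print(greet("ada"))  # HELLO, ADA
```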
Open notebook-sessions/week3/session2_more_python_skills_group.ipynb and implement the make_averager() closure. Extend it with reset and count methods using returned functions.
📋 Quick Reference Cheat Sheet
Copy-paste these patterns as a quick reference for today's topics!
🏗️ Object-Oriented Programming
# ═══ BASIC CLASS ═══
class Player:
    def __init__(self, name: str, level: int = 1):
        self.name = name          # Attribute
        self.level = level

    def power(self) -> int:       # Method
        return self.level * 3

    def __repr__(self):           # Nice printing
        return f"Player('{self.name}', {self.level})"
# ═══ INHERITANCE ═══
class Mage(Player):               # Mage IS-A Player
    def __init__(self, name, level=1, mana=100):
        super().__init__(name, level)  # Call parent
        self.mana = mana
# ═══ DATACLASS (Python 3.7+) ═══
from dataclasses import dataclass
@dataclass
class Item:
    name: str
    price: float
    quantity: int = 1  # Auto-generates __init__, __repr__, __eq__!
✨ Decorators
from functools import wraps
# ═══ BASIC DECORATOR ═══
def my_decorator(func):
    @wraps(func)  # Preserves function name/docstring
    def wrapper(*args, **kwargs):
        print("Before function call")
        result = func(*args, **kwargs)
        print("After function call")
        return result
    return wrapper

@my_decorator
def greet(name):
    print(f"Hello, {name}!")
# ═══ DECORATOR WITH PARAMETERS ═══
def repeat(times):
    def decorator(func):
        @wraps(func)  # Keep the wrapped function's metadata
        def wrapper(*args, **kwargs):
            result = None
            for _ in range(times):
                result = func(*args, **kwargs)
            return result  # Return the last call's result
        return wrapper
    return decorator
@repeat(3) # Calls say_hi() 3 times
def say_hi(): print("Hi!")
⚡ Comprehensions
# ═══ LIST COMPREHENSION ═══
squares = [x**2 for x in range(5)] # [0, 1, 4, 9, 16]
evens = [x for x in range(10) if x % 2 == 0] # [0, 2, 4, 6, 8]
# ═══ DICT COMPREHENSION ═══
name_lengths = {name: len(name) for name in ['Ada', 'Bob']}
# {'Ada': 3, 'Bob': 3}
# ═══ SET COMPREHENSION ═══
unique_lengths = {len(w) for w in ['hi', 'hello', 'yo']}
# {2, 5}
# ═══ GENERATOR EXPRESSION (lazy, memory efficient) ═══
gen = (x**2 for x in range(1000000)) # Doesn't compute until needed
print(next(gen)) # 0 - compute one at a time
🔒 Closures
# ═══ FUNCTION FACTORY ═══
def make_multiplier(factor):
    def multiply(x):
        return x * factor  # 'factor' is remembered!
    return multiply
double = make_multiplier(2)
triple = make_multiplier(3)
print(double(5)) # 10
# ═══ STATEFUL CLOSURE ═══
def make_counter():
    count = 0
    def increment():
        nonlocal count  # Required to MODIFY the outer variable
        count += 1
        return count
    return increment
counter = make_counter()
print(counter()) # 1
print(counter()) # 2
OOP: Bundle data + behavior. Use self for attributes, __init__ for construction, @dataclass for simple data objects.
Decorators: Wrap functions with @decorator. Use @wraps to preserve metadata. Great for logging, timing, auth.
Comprehensions: [expr for x in iter if cond]. Cleaner than loops for simple transformations.
Closures: Inner functions remember outer variables. Use nonlocal to modify them. Great for factories and state.