
Building an Automated AI Daily Brief: My Morning Workflow
A technical guide to building a cron-triggered AI agent that aggregates tasks and sends a consolidated morning briefing email.
The Problem: Context Switching Before Coffee
The first thirty minutes of the workday usually define the output of the next eight hours. For a long time, my morning routine involved a chaotic cycle of opening browser tabs: Google Calendar for meetings, Notion for project tracking, GitHub for pull requests, and Slack for urgent fires.
By the time I had clarity on what I needed to do, I had already burned cognitive energy just locating the data. I wasn't acting; I was reacting.
I wanted a system that pushed information to me, rather than me pulling it. I wanted an Executive Assistant, but purely digital. The goal: A single email, waiting in my inbox at 7:00 AM, containing a synthesized view of my day, prioritized by an LLM.
Here is the architecture, code, and deployment strategy for my AI Daily Brief Workflow.
The Architecture
This isn't a complex microservice architecture; it's a robust script designed for reliability. We are building a linear pipeline (there's a sketch of the glue code right after this list):
- Ingest: Fetch raw data from sources (Google Calendar & Notion/Todoist).
- Process: Sanitize the data into a JSON payload.
- Reason: Send the payload to OpenAI (GPT-4o) with a system prompt that makes it act as an opinionated Executive Assistant.
- Deliver: Render an HTML email and send it via SMTP.
- Schedule: Run reliably via GitHub Actions (Cron) with failure notifications.
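Before diving into the steps, here is the glue as a single entry point. This is a sketch rather than verbatim repo code: it assumes the three functions defined in Steps 1–3 and stubs out the calendar fetcher, since this post focuses on the Notion side.

```python
# main.py -- the linear pipeline, top to bottom.
# fetch_active_tasks, generate_briefing, and send_email are defined below;
# fetch_calendar_events is a stub for whatever calendar source you use.

def fetch_calendar_events():
    return []  # Plug in Google Calendar (or any other source) here

def main():
    tasks = fetch_active_tasks()                      # Ingest
    events = fetch_calendar_events()                  # Ingest
    briefing_html = generate_briefing(tasks, events)  # Process + Reason
    send_email(briefing_html)                         # Deliver

if __name__ == "__main__":
    main()
```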
The Tech Stack
- Language: Python 3.10+
- LLM: OpenAI API (GPT-4o for reasoning capability)
- Database: None (Stateless run)
- Infrastructure: GitHub Actions (Free tier allows 2,000 minutes/month, more than enough for a daily script).
Step 1: Aggregating the Data
First, we need to gather the context. For this example, I'll focus on fetching tasks from a Notion database, though the same logic applies to the Todoist or Jira APIs.
We need a function that returns a clean list of pending tasks. We don't want the LLM to hallucinate tasks, so we fetch strictly from the source of truth.
```python
import os
import requests
from datetime import datetime

NOTION_TOKEN = os.getenv("NOTION_TOKEN")
DATABASE_ID = os.getenv("NOTION_DB_ID")

def fetch_active_tasks():
    url = f"https://api.notion.com/v1/databases/{DATABASE_ID}/query"
    headers = {
        "Authorization": f"Bearer {NOTION_TOKEN}",
        "Notion-Version": "2022-06-28",
        "Content-Type": "application/json",
    }
    # Filter for any status that isn't "Done"
    payload = {
        "filter": {
            "property": "Status",
            "status": {"does_not_equal": "Done"}
        }
    }
    response = requests.post(url, json=payload, headers=headers, timeout=30)
    response.raise_for_status()  # Fail loudly so the scheduler can alert us
    data = response.json()

    tasks = []
    for item in data.get("results", []):
        props = item["properties"]
        title = props["Name"]["title"]
        due = props["Due Date"]["date"]
        priority = props["Priority"]["select"]
        tasks.append({
            # Guard against empty titles and unset properties
            "task": title[0]["text"]["content"] if title else "Untitled",
            "due": due["start"] if due else "No Date",
            "priority": priority["name"] if priority else "Unset",
        })
    return tasks
```

Dev Note: Always handle your API keys via environment variables. Never hardcode them.
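For local testing, a common pattern is to keep those keys in a git-ignored .env file and load it with the python-dotenv package. This is a local convenience only; the deployed version in Step 4 injects secrets through GitHub Actions.

```python
# Local development convenience -- assumes `pip install python-dotenv`
# and a .env file that is listed in .gitignore.
from dotenv import load_dotenv

load_dotenv()  # Populates os.environ from .env before any os.getenv() call
```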
Step 2: The Logic Layer (AI Synthesis)
This is where the automation moves from "dumb script" to "intelligent agent." If I just emailed myself a raw list of tasks, I wouldn't be adding much value. I need the AI to look at the list, look at the day of the week, and tell me what actually matters.
We construct a prompt that forces the AI to be opinionated.
```python
from openai import OpenAI

client = OpenAI(api_key=os.getenv("OPENAI_API_KEY"))

def generate_briefing(tasks, calendar_events):
    current_date = datetime.now().strftime("%A, %B %d")

    system_prompt = """
    You are a ruthless, highly efficient Executive Assistant for a Senior Developer.
    Your goal is to organize the day effectively.
    Input data: A JSON list of tasks and calendar events.
    Output: An HTML formatted daily brief.
    Structure:
    1. 🛑 THE ONE THING: The single most important task for today based on priority/deadlines.
    2. 📅 SCHEDULE: A quick summary of meetings (highlight conflicts).
    3. 🔨 ACTION LIST: Group remaining tasks by context (Deep Work vs. Admin).
    4. 🧠 MINDSET: A one-sentence stoic reminder.
    Tone: Direct, professional, no fluff.
    """

    user_content = f"Date: {current_date}\nTasks: {tasks}\nEvents: {calendar_events}"

    response = client.chat.completions.create(
        model="gpt-4o",
        messages=[
            {"role": "system", "content": system_prompt},
            {"role": "user", "content": user_content},
        ],
        temperature=0.7,
    )
    return response.choices[0].message.content
```

Why GPT-4o?
For a daily planner, reasoning is cheaper than fixing mistakes. GPT-3.5 often hallucinates priorities or misinterprets dates. GPT-4o effectively handles the logic of "If I have 4 hours of meetings, I shouldn't schedule 6 hours of deep coding."
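A small refinement on the "Process" step from the architecture: interpolating raw Python objects into the prompt works, but serializing them with json.dumps first gives the model unambiguous structure. This swap is my suggestion rather than something the API requires:

```python
import json

def build_user_content(tasks, calendar_events, current_date):
    # Explicit JSON is easier for the model to parse than Python reprs
    return (
        f"Date: {current_date}\n"
        f"Tasks: {json.dumps(tasks, indent=2)}\n"
        f"Events: {json.dumps(calendar_events, indent=2)}"
    )
```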
Step 3: The Transport (SMTP)
Email is the perfect delivery mechanism. It is asynchronous, searchable, and works offline. I use Python's built-in smtplib with Gmail (using an App Password).
```python
import smtplib
from email.mime.text import MIMEText
from email.mime.multipart import MIMEMultipart

def send_email(html_content):
    sender_email = os.getenv("EMAIL_USER")
    receiver_email = os.getenv("EMAIL_TARGET")
    password = os.getenv("EMAIL_PASS")

    msg = MIMEMultipart("alternative")
    msg["Subject"] = f"Daily Brief: {datetime.now().strftime('%Y-%m-%d')}"
    msg["From"] = sender_email
    msg["To"] = receiver_email
    msg.attach(MIMEText(html_content, "html"))

    try:
        with smtplib.SMTP_SSL("smtp.gmail.com", 465) as server:
            server.login(sender_email, password)
            server.sendmail(sender_email, receiver_email, msg.as_string())
        print("Email sent successfully.")
    except Exception as e:
        print(f"Failed to send email: {e}")
        # Re-raise so the job exits non-zero and the scheduler's alert fires
        raise
```

Step 4: Scheduling & Reliability
Here is the critical engineering lesson. How do you ensure this runs every day without maintaining a server?
The Solution: GitHub Actions Cron
We don't need a VPS. We can use a GitHub Action workflow file (.github/workflows/daily-brief.yml) to spin up a runner, install dependencies, run the script, and die.
```yaml
name: Daily Brief Agent

on:
  schedule:
    # Runs at 11:00 UTC (check your timezone offset!)
    - cron: '0 11 * * 1-5' # Mon-Fri only
  workflow_dispatch: # Allows manual triggering for testing

jobs:
  run-brief:
    runs-on: ubuntu-latest
    steps:
      - name: Checkout code
        uses: actions/checkout@v3

      - name: Setup Python
        uses: actions/setup-python@v4
        with:
          python-version: '3.10'

      - name: Install dependencies
        # tenacity powers the retry logic described below
        run: pip install requests openai tenacity

      - name: Run Agent
        env:
          NOTION_TOKEN: ${{ secrets.NOTION_TOKEN }}
          NOTION_DB_ID: ${{ secrets.NOTION_DB_ID }}
          OPENAI_API_KEY: ${{ secrets.OPENAI_API_KEY }}
          EMAIL_USER: ${{ secrets.EMAIL_USER }}
          EMAIL_PASS: ${{ secrets.EMAIL_PASS }}
          EMAIL_TARGET: ${{ secrets.EMAIL_TARGET }}
        run: python main.py
```

Reliability & Fallbacks
Cron jobs on GitHub Actions are not guaranteed to run at the exact second. During high load, they can be delayed by 10-20 minutes. For a daily brief, this is acceptable. However, scripts fail. API endpoints time out.
To build resilience, I implemented two safeguards:
- Retries in Code: The Python script wraps its API calls in retry logic (using the tenacity library) to handle transient network errors; there's a sketch right after this list.
- Failure Notification: If the GitHub Action fails (returns a non-zero exit code), GitHub emails the repo owner. This acts as my "Dead Man's Switch": if I don't get the brief, I check my GitHub notifications for the traceback.
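Here is a minimal sketch of that retry wrapper. The attempt count and backoff window are illustrative defaults, not sacred numbers:

```python
from tenacity import retry, stop_after_attempt, wait_exponential
import requests

@retry(stop=stop_after_attempt(3), wait=wait_exponential(multiplier=1, max=30))
def post_with_retry(url, **kwargs):
    # raise_for_status() turns HTTP errors into exceptions,
    # which tenacity then treats as retryable failures
    response = requests.post(url, timeout=30, **kwargs)
    response.raise_for_status()
    return response

# Usage: a drop-in replacement for requests.post in fetch_active_tasks()
# response = post_with_retry(url, json=payload, headers=headers)
```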
The Result
Now, my morning loop is simple:
- Wake up.
- Make coffee.
- Open email.
- Read the AI Daily Brief.
- Start the "One Thing."
This system saves me about 20 minutes of organizational shuffling every morning. Over a year, that's roughly 80 hours of reclaimed time (20 minutes × ~240 working days): two full work weeks gained just by automating the context switch.
Building agents isn't always about complex autonomous loops; sometimes, it's just about delivering the right data, to the right place, at the right time.