TILs & Thoughts - Jan 2025
January 31, 2025
- Flight Tip: you can chat with Meta's Llama LLM over WhatsApp, which works on in-flight messaging/Wi-Fi plans, making it a handy brainstorming partner during flights.
- Refocused on building more: started tracking my build vs. consume ratio and resumed time logging to prioritize building activities.
- Getting older brings clarity and perspective on priorities, particularly regarding the use of time, energy, and focus.
- Reading more, especially technical and business books I want to highlight heavily. Optimized my Kindle setup across devices by enabling vertical scrolling mode in the iPhone, Mac, and PC apps. (Note: the iPhone app is also useful for uploading EPUBs/PDFs.)
- Tested NotebookLM to create custom podcasts from learning materials for synthesis during commutes/workouts. Quality is inconsistent, with noticeable hallucinations (discussing topics outside the source material).
- ChatGPT effectively breaks PDF books down by chapter. Useful for limiting an LLM's context to a specific section, e.g., discussing or creating flashcards for only Chapter 17. (A local splitting sketch is at the end of this post.)
- Using Amazon Delivery Day shipping: it's slower but provides credits usable for Kindle/Audible purchases. A good trade-off for non-urgent items.
- Switched from iTerm2 to Ghostty as my primary Mac terminal app, reflecting increased CLI usage over the past year.
- Deepening my exploration of Stoicism. I'm still new to it, but reading 'The Daily Stoic' over the past year has been very helpful.
- Learned and applied CloudFormation, Packer, YAML, and system-architecture concepts during the RFB project. (A minimal CloudFormation sketch is at the end of this post.)
- Leveraging 'Cedric' (Amazon's internal LLM) heavily for work-related writing: drafting, brainstorming structure, refining tone, removing filler words, and composing emails. This raises questions about the market for tools like Grammarly.
- Focusing on building coding projects end to end. While I'm comfortable with product, Python, and SQL, I'm pushing myself to handle deployment, frontend, and pipelines, areas where I previously relied on partners.
- Signed up for Perplexity Pro (annual) via an Xfinity membership offer.
- Resubscribed to Cursor Pro to accelerate coding projects.
- Tested reasoning models (o3-mini, R1) and was astounded by their step-by-step thinking and ability to handle constraints, especially for a synthetic data generation project.
- Typing speed feels increasingly critical with LLMs; my own input is often the bottleneck, not the model's generation. Exploring voice-to-text as a potentially faster channel.
- Using Raycast to accelerate workflows.
- Revised approach to note-taking: focus on ideation, not review/retention. Searchability is key. This differs from complex PKM systems (backlinks, etc.). Using UpNote after previously using Workflowy and Obsidian. (May write a separate post on this).
- Using Git more intentionally. With LLM-generated code, frequent snapshots and a disciplined workflow are crucial, much like managing outsourced work: clear requirements, clear expectations, and tests.
- Worked on a personal side project: 'RFB' (Regular Flyer Buddy app). Phase 1 involved building the full-stack app locally; Phase 2 focused on AWS deployment.
- Interacting extensively with LLMs (like Cursor) after completing a project built with their help. Asking about the generated architecture and design choices significantly accelerates my understanding of the 'how' and 'why'.
- Tool Tip: Repomix (https://repomix.com/) packs an entire GitHub repo into a single XML file, useful for feeding a whole codebase into an LLM's context.
- Using multiple LLMs strategically as their features/strengths diverge: Claude Projects for structured thinking/artifacts, Cursor for coding (cost-effective), and ChatGPT (free) for follow-up questions without disrupting the primary context. (Note: sharing capabilities vary; ChatGPT makes it easy, Claude's has improved, Cursor's is limited.)
- Lesson Learned: Complex, highly detailed prompts often perform poorly with LLMs (I once wasted 4 hours debugging Docker this way). A step-by-step, incremental approach works better, especially for tasks like Docker/AWS deployment. Break down complex requests.
- Realization: I likely interact with LLMs 5x more than with people. This constant interaction seems to be improving my ability to ask good questions and communicate clearly.
- Reasoning models are exceptionally good at creating synthetic data. Successfully built a synthetic data generation engine using them, fulfilling a long-standing 'someday' project idea. (A minimal sketch is at the end of this post.)
- Began designing and building a new forecasting architecture for work.
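
For the chapter-splitting tip above: if you'd rather split the PDF locally instead of asking ChatGPT, here's a minimal sketch using the pypdf library. The file names and page range are made up for illustration.

```python
# Extract one chapter's pages from a PDF so only that chapter goes
# into the LLM's context. Assumes the pypdf library (pip install pypdf);
# file names and the Chapter 17 page range are hypothetical.
from pypdf import PdfReader, PdfWriter

def extract_pages(src: str, dst: str, start: int, end: int) -> None:
    """Copy pages [start, end) (0-indexed) from src into a new PDF at dst."""
    reader = PdfReader(src)
    writer = PdfWriter()
    for page in reader.pages[start:end]:
        writer.add_page(page)
    with open(dst, "wb") as f:
        writer.write(f)

# Say Chapter 17 spans pages 301-320 of the book:
extract_pages("book.pdf", "book_ch17.pdf", start=300, end=320)
```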
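
For the CloudFormation item: the core loop is writing a YAML template and handing it to the service. A minimal sketch using boto3, assuming AWS credentials are configured; the stack name and template are illustrative, not the actual RFB infrastructure.

```python
# Create a tiny CloudFormation stack (a single S3 bucket) and wait for
# it to finish. Assumes boto3 and configured AWS credentials; the
# "rfb-example" stack name is hypothetical.
import boto3

TEMPLATE = """\
AWSTemplateFormatVersion: '2010-09-09'
Description: Minimal example stack with one S3 bucket.
Resources:
  AppBucket:
    Type: AWS::S3::Bucket
"""

cfn = boto3.client("cloudformation")
cfn.create_stack(StackName="rfb-example", TemplateBody=TEMPLATE)
cfn.get_waiter("stack_create_complete").wait(StackName="rfb-example")
print("Stack created.")
```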
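
And for the synthetic-data engine: the pattern is stating the schema and constraints in one prompt and letting the reasoning model do the bookkeeping. A hedged sketch, assuming the OpenAI Python SDK and the o3-mini model; the schema and constraints are invented for illustration, since this post doesn't describe the real engine's internals.

```python
# Ask a reasoning model for synthetic records that satisfy explicit
# constraints. Assumes the OpenAI Python SDK (pip install openai) and
# an OPENAI_API_KEY in the environment; the customer schema below is
# a made-up example.
import json
from openai import OpenAI

client = OpenAI()

def generate_rows(n: int) -> list[dict]:
    """Request n synthetic customer records as a JSON array."""
    prompt = (
        f"Generate {n} synthetic customer records as a JSON array. "
        "Fields: name (string), age (integer 18-90), plan ('basic' or 'pro'). "
        "Constraints: at least 30% of records must be 'pro', and no name "
        "may repeat. Return only the JSON array, no prose."
    )
    resp = client.chat.completions.create(
        model="o3-mini",
        messages=[{"role": "user", "content": prompt}],
    )
    return json.loads(resp.choices[0].message.content)

rows = generate_rows(20)
print(rows[0])
```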