TILs & Thoughts - Jan 2025

January 31, 2025

  1. Flight Tip: Meta's Llama LLM is accessible through WhatsApp, which works over in-flight messaging/Wi-Fi, making it handy for brainstorming or thinking mid-flight.
  2. Refocused on building more: started tracking my build vs. consume ratio and resumed time logging to prioritize building activities.
  3. Getting older brings clarity and perspective on priorities, particularly regarding the use of time, energy, and focus.
  4. Reading more, especially technical and business books where I want to highlight passages. Optimized my Kindle setup across devices by enabling vertical scrolling mode in the iPhone, Mac, and PC apps. (The iPhone app is also useful for uploading EPUBs/PDFs.)
  5. Tested NotebookLM to create custom podcasts from learning materials for synthesis during commutes/workouts. Quality is inconsistent, with noticeable hallucinations (discussing topics outside the source material).
  6. ChatGPT effectively breaks PDF books down by chapter, which is useful for limiting LLM context to a specific section (e.g., discussing or creating flashcards for Chapter 17 alone). The same split can be done locally; see the pypdf sketch after this list.
  7. Using Amazon Delivery Day shipping: it's slower but provides credits usable for Kindle/Audible purchases. A good trade-off for non-urgent items.
  8. Switched from iTerm2 to Ghostty as my primary Mac terminal app, reflecting increased CLI usage over the past year.
  9. Deepening my exploration of Stoicism. Still new to it, but reading 'The Daily Stoic' over the past year has been very helpful.
  10. Learned and applied CloudFormation, Packer, YAML, and system architecture concepts during the RFB project (see the boto3 sketch after this list).
  11. Leveraging 'Cedric' (Amazon internal LLM) heavily for work-related writing: drafting, brainstorming structure, refining tone, removing filler words, and composing emails. This raises questions about the market for tools like Grammarly.
  12. Focusing on building end-to-end coding projects. While comfortable with product, Python, and SQL, I'm pushing myself to handle deployment, frontend, and pipelines, areas where I previously relied on partners.
  13. Signed up for Perplexity Pro (annual) via an Xfinity membership offer.
  14. Resubscribed to Cursor Pro to accelerate coding projects.
  15. Tested reasoning models (o3-mini, R1) and was astounded by their step-by-step thinking and ability to handle constraints, especially for a synthetic data generation project.
  16. Typing speed feels increasingly critical with LLMs; getting text in and out is often the bottleneck, not the model's generation. Exploring voice-to-text as a potentially faster input channel.
  17. Using Raycast, a macOS launcher, to accelerate everyday workflows.
  18. Revised approach to note-taking: focus on ideation, not review/retention. Searchability is key. This differs from complex PKM systems (backlinks, etc.). Using UpNote after previously using Workflowy and Obsidian. (May write a separate post on this).
  19. Using Git more intentionally. With LLM-generated code, frequent snapshots and a disciplined workflow are crucial, much like managing outsourced work (clear requirements, expectations, and tests); see the checkpoint sketch after this list.
  20. Worked on a personal side project: 'RFB' (Regular Flyer Buddy app). Phase 1 involved building the full-stack app locally; Phase 2 focused on AWS deployment.
  21. After completing a project built with LLM help, I interact extensively with the LLM (e.g., in Cursor): asking about the generated architecture and design choices significantly accelerates my understanding of the 'how' and 'why'.
  22. Tool Tip: RepoMix (https://repomix.com/) packs a GitHub repo into a single XML file, useful for feeding an entire codebase into an LLM's context.
  23. Using multiple LLMs strategically as features/strengths diverge: Claude Projects for structured thinking/artifacts, Cursor for coding (cost-effective), and ChatGPT (free) for follow-up questions without disrupting the primary context. (Note: Sharing capabilities vary; ChatGPT is easy, Claude's improved, Cursor is limited).
  24. Lesson Learned: complex, detailed prompts often perform poorly with LLMs (I wasted 4 hours debugging Docker this way). A step-by-step, incremental approach works better, especially for tasks like Docker/AWS deployment; break complex requests down.
  25. Realization: I likely interact with LLMs 5x more than with people. This constant interaction seems to be improving my ability to ask good questions and communicate clearly.
  26. Reasoning models are exceptionally good at creating synthetic data. Successfully built a synthetic data generation engine with them, fulfilling a long-standing 'someday' project idea; see the generation sketch after this list.
  27. Began designing and building a new forecasting architecture for work.
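
For item 6: the same chapter split can be done locally instead of through ChatGPT. A minimal sketch using pypdf, assuming you already know each chapter's page range; the file name and the range below are placeholders:

```python
# Split a book PDF into per-chapter files so a single chapter can be fed
# to an LLM on its own. Page ranges are 1-indexed placeholders per book.
from pypdf import PdfReader, PdfWriter

chapters = {"ch17": (301, 324)}  # hypothetical range for Chapter 17

reader = PdfReader("book.pdf")
for name, (start, end) in chapters.items():
    writer = PdfWriter()
    for i in range(start - 1, end):  # shift to 0-indexed pages
        writer.add_page(reader.pages[i])
    with open(f"{name}.pdf", "wb") as out:
        writer.write(out)
```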
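
For item 10: the CloudFormation work itself lives in YAML templates, but driving a deployment from Python was a comfortable bridge from familiar ground. A sketch with boto3, not the RFB project's actual setup; the template path and stack name are invented, and configured AWS credentials are assumed:

```python
# Validate a local CloudFormation template, create the stack, and wait for it.
import boto3

cfn = boto3.client("cloudformation")

with open("rfb-stack.yaml") as f:  # hypothetical template file
    template_body = f.read()

# Fail fast with a ClientError if the template is malformed.
cfn.validate_template(TemplateBody=template_body)

cfn.create_stack(
    StackName="rfb-dev",  # hypothetical stack name
    TemplateBody=template_body,
    Capabilities=["CAPABILITY_NAMED_IAM"],  # needed when the template creates IAM resources
)

# Block until the stack reaches CREATE_COMPLETE (raises if creation fails).
cfn.get_waiter("stack_create_complete").wait(StackName="rfb-dev")
```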
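
For item 19: the discipline boils down to committing a checkpoint after every LLM round, so a bad generation is one reset away. A sketch of the kind of helper I mean; the commit-message format is just a personal convention:

```python
# Commit a labeled checkpoint of the working tree; no-op when nothing changed.
import subprocess
from datetime import datetime

def snapshot(label: str) -> None:
    subprocess.run(["git", "add", "-A"], check=True)
    # `git diff --cached --quiet` exits 0 when the index matches HEAD.
    if subprocess.run(["git", "diff", "--cached", "--quiet"]).returncode == 0:
        return  # nothing staged, skip the commit
    msg = f"checkpoint({label}): {datetime.now():%Y-%m-%d %H:%M}"
    subprocess.run(["git", "commit", "-m", msg], check=True)

snapshot("cursor-session")  # e.g., after accepting a batch of LLM edits
```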
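
For item 26: the core loop is to prompt a reasoning model with explicit constraints, then re-verify every constraint in code before accepting a batch. A minimal sketch using the OpenAI Python SDK; the schema and constraints are invented for illustration, and production code would also handle the model wrapping its output in code fences:

```python
# Generate constrained synthetic records with a reasoning model, then validate.
import json
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

PROMPT = """Generate 5 synthetic customer records as a JSON array of objects
with keys: name, email, age, signup_date (ISO format).
Constraints: age between 18 and 90; signup_date in 2024; emails unique.
Return only the JSON array, no prose."""

resp = client.chat.completions.create(
    model="o3-mini",
    messages=[{"role": "user", "content": PROMPT}],
)
records = json.loads(resp.choices[0].message.content)

# Never trust generated data blindly: re-check every stated constraint.
for r in records:
    assert 18 <= r["age"] <= 90, r
    assert r["signup_date"].startswith("2024"), r
assert len({r["email"] for r in records}) == len(records), "duplicate emails"
print(f"accepted {len(records)} records")
```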