LinkedIn Highlights, June 2025 - AI Agents Edition
Build smarter AI agents with six open-source tools and a bonus toolkit that optimizes Llama prompts by 45%
Welcome to LinkedIn Highlights!
Each month, I'll share my five seven top-performing LinkedIn posts, bringing you the best of AI straight from the frontlines of academia and industry. This edition includes seven posts instead of five—there were just too many good ones to leave out!
As a frequent LinkedIn contributor, I regularly share insights on groundbreaking papers, promising open-source packages, and significant AI product launches. These posts offer more depth and detail than our weekly snippets, providing a comprehensive look at the latest AI developments.
Over the past few months, I’ve been developing and experimenting with several open-source packages while creating a few AI agents. This post covers six of my most popular LinkedIn posts on the topic, each showcasing the package’s core features and my key takeaways. Plus, a bonus tip at the end: a toolkit that automatically converts prompts from GPT and Claude into Llama-optimized versions, boosting performance by up to 45%.
Whether you're not on LinkedIn or simply missed a post, this monthly roundup ensures you stay informed about the most impactful AI news and innovations.
Recent posts:
(1) LangMem
LangMem is a new open-source library that gives LLM agents long-term memory and it’s refreshingly easy to use.
It’s built for developers working with LangGraph or custom agents, and it solves a persistent problem: how to make agents remember and adapt across sessions without bloated prompts or manual hacks.
LangMem introduces a clean memory API that works with any storage backend and includes tools for:
Storing important information during conversations - agents decide what matters and when to save it
Searching memory when relevant - retrieving facts, preferences, or prior context
Running background memory consolidation - automatically refining and updating knowledge over time
It integrates natively with LangGraph’s memory store, but you can also plug it into your own stack using Postgres, Redis, or in-memory stores.
This design is especially useful for building agents that need to:
-> Personalize interactions across sessions
-> Maintain consistency in long-running workflows
-> Adapt behavior based on evolving user input
Unlike Mem0, which requires explicit memory updates, LangMem handles memory automatically in the background, storing and retrieving key details as needed, and integrates with LangGraph out of the box.
GitHub repo https://github.com/langchain-ai/langmem
(2) Browser Use
LLM agents can read the web, but few can truly use it. This open-source package changes that.
Browser-use is an open-source library that turns any LLM into a browser-native agent, with first-class support for real UI actions and multi-step tasks.
Out of the box, it supports:
Direct interaction with Chromium via Playwright - no extra scripting layers or wrappers
Seamless LLM integration - use GPT-4o, DeepSeek-V3, Claude, Gemini, or even Grok
Ready-to-run UX - spin up agents with a single function, or test flows in a ready-to-run Web UI or CLI
Browser use can handle real-world tasks like:
-> Checking your latest Stripe payouts and updating a financial tracking sheet
-> Logging into your CMS, creating a draft blog post, and uploading media
-> Scraping product reviews across sites and summarizing them in a shared doc
-> Tracking changes to your competitors’ pricing pages and alerting your team
For those building AI agents that go beyond chat, this gives you a direct bridge to the real web, not a sandbox.
GitHub repo https://github.com/browser-use/browser-use
My recent post on coding with AI:
(3) OpenAI Agents SDK
OpenAI has one of the most useful frameworks for multi-agent workflows, and it’s open-source.
Building production-ready agent systems has been notoriously complex, requiring deep knowledge of orchestration patterns, handoff mechanisms, and debugging distributed AI behavior. The new OpenAI Agents SDK simplifies this complexity with a remarkably clean Python interface that handles the heavy lifting.
Why I find this framework so useful:
Provider-agnostic design - works with OpenAI's APIs plus 100+ other LLMs, so you're not locked into a single provider
Built-in handoffs - agents can seamlessly transfer control to specialized agents based on context, like routing Spanish queries to Spanish-speaking agents
Integrated tracing - every agent run is automatically tracked using popular tools such as AgentOps, Braintrust, and Arize AI Phoenix, making debugging multi-agent conversations straightforward instead of impossible
Guardrails by default - configurable safety checks for input and output validation prevent runaway behavior
Setting up a triage system that routes conversations to language-specific agents takes just a few lines of code, with the SDK handling message persistence, context switching, and execution flow automatically.
I'm particularly impressed by the tracing capabilities - the framework integrates with popular observability tools like Logfire, AgentOps, and Braintrust, giving you visibility into exactly what your agents are doing and why.
For developers who have been intimidated by the complexity of multi-agent architecture, this SDK removes the final barrier to building sophisticated agent workflows that actually work in production.
GitHub repo https://github.com/openai/openai-agents-python
(4) Agno
Most agent frameworks I've used struggle with performance at scale, but I recently tested one that achieves microsecond-level instantiation.
The math doesn’t lie: if each agent takes seconds to spin up and consumes megabytes of memory, running the thousands needed for complex workflows becomes infeasible.
A new library called Agno addresses this through architectural decisions that prioritize performance without sacrificing functionality. The framework supports 23+ model providers and implements a progressive five-level agent architecture, from basic tool-enabled agents to coordinated multi-agent workflows.
Key technical capabilities include:
Native multimodal processing - handles text, image, audio, and video inputs without additional preprocessing layers
First-class reasoning implementation - agents can explicitly "think through" problems using built-in reasoning tools or custom chain-of-thought approaches
Agentic search with hybrid retrieval - combines vector search with keyword matching and re-ranking for improved RAG performance
The performance difference is substantial. In head-to-head comparisons with LangGraph, Agno completes instantiation benchmarks before competing frameworks reach halfway through their measurement cycles.
Agno also includes pre-built FastAPI routes, structured output handling, session storage, and monitoring capabilities.
GitHub repo https://github.com/agno-agi/agno
(5) Agents Towards Production
A new, comprehensive, open-source playbook has just solved the biggest challenge in developing AI agents: transitioning from experimentation to production-ready systems.
Unlike scattered documentation or theoretical frameworks, this resource provides executable tutorials that guide you from zero to a working implementation in minutes.
The playbook covers the entire agent lifecycle:
Orchestration fundamentals - build multi-tool workflows with memory persistence and agent-to-agent messaging using frameworks like Xpander and LangChain
Production deployment - containerize agents with Docker, scale on GPU infrastructure via Runpod, or run on-premise with Ollama for privacy-sensitive applications
Security and observability - implement real-time guardrails against prompt injection, add comprehensive tracing with LangSmith and Qualifire, and automate behavioral testing
Advanced capabilities - enable dual-memory architectures with Redis for semantic search, integrate real-time web data through Tavily, and deploy agents as APIs with FastAPI
What makes this resource invaluable is its tutorial-first approach. Each concept comes with runnable notebooks and production-ready code.
Whether you're building customer service agents, research assistants, or autonomous workflows, the playbook provides tested patterns for tool integration, multi-agent coordination, and model customization.
GitHub repo https://github.com/NirDiamant/agents-towards-production
(6) Docling
Keep reading with a 7-day free trial
Subscribe to AI Tidbits to keep reading this post and get 7 days of free access to the full post archives.