Artificial Intelligence: Foundations of Computational Agents
March 16, 2025 · simonw
nivertech
It has always amazed me that different branches of CS, AI/ML, and Complex Systems/Complexity Science have different views on agents:
Objects in OOP - something that can have properties/attributes and methods (verbs/commands), usually modeled after real-life/domain entities.
Aggregates in Domain-Driven Design (DDD) - transactional clusters of objects modeling collections of entities in the domain.
Actors in the Actor Model / Active Objects - something we can send messages to and receive messages from, which may carry some business logic.
Agents in Agent-Based Modeling and Simulation (ABM) - proxies for a decision maker.
Digital Twins - a more realistic proxy/replica of a real-life person, object, or process.
Multi-Agent Systems (MAS) - how to use agents to solve or optimize a real problem in production.
RL/MARL (Multi-Agent Reinforcement Learning) - how to train an ML algorithm without supervision (i.e. without a labeled dataset) by placing agents in an environment that can automatically provide reward/punishment feedback (see the sketch after this list).
LLM Agents - dynamically generated intelligent business process workflows (including Robotic Process Automation - RPA, aka Tool Use/Function Calling).
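A minimal sketch of the agent/environment reward loop the RL/MARL item describes; the names (LineEnv, RandomAgent) are hypothetical and not taken from any particular library:

    import random

    class LineEnv:
        """1-D world: the agent starts at position 0 and is rewarded for reaching 5."""
        def __init__(self):
            self.pos = 0
        def step(self, action):                  # action is -1 (left) or +1 (right)
            self.pos = max(0, self.pos + action)
            reward = 1 if self.pos == 5 else 0   # the environment supplies the feedback
            return self.pos, reward, self.pos == 5

    class RandomAgent:
        """Unsupervised in the labeled-data sense: it only ever sees states and rewards."""
        def act(self, state):
            return random.choice([-1, 1])

    env, agent = LineEnv(), RandomAgent()
    state, done = 0, False
    while not done:
        action = agent.act(state)
        state, reward, done = env.step(action)

A real RL agent would use the reward signal to update a policy; the point here is just the shape of the loop - no labels, only states, actions, and rewards flowing between agent and environment.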
nine_k
So basically an agent is a procedure, by this definition: it takes parameters (the environment) and acts on it by executing side effects. An email filter is an agent. A database trigger is an agent.
nivertech
> it takes parameters (environment)
I think it's better to imagine the agent as something physically placed inside the environment, actually modifying/changing/mutating it in place (both views are sketched after this comment).
> An email filter is an agent. A database trigger is an agent.
You're missing the "I" (Intelligence) part - the filtering logic in the email filter, or the business logic in the DB trigger/stored procedure/CGI script/AWS Lambda function/etc.
But yes, an agent doesn't have to be Intelligent, it can be a Dumb Agent / NPC / Zero-Intelligence Trader.
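A hypothetical sketch contrasting the two views in this exchange - the filter as a procedure that takes the environment and acts on it via side effects, versus an agent that lives inside the environment and mutates it in place. All names here are made up for illustration:

    # View 1 (procedure): an email filter as a function that takes the environment
    # (the inbox) and acts on it through side effects - a "dumb agent" with fixed logic.
    def spam_filter(messages):
        messages[:] = [m for m in messages if "viagra" not in m.lower()]

    # View 2 (embedded agent): the agent is part of the environment it mutates.
    class Inbox:
        def __init__(self):
            self.messages = []
            self.agents = [spam_filter]          # agents live inside the environment
        def deliver(self, message):
            self.messages.append(message)
            for agent in self.agents:            # each agent mutates the inbox in place
                agent(self.messages)

    inbox = Inbox()
    inbox.deliver("Cheap viagra!!!")
    inbox.deliver("Meeting at 10am")
    print(inbox.messages)                        # ['Meeting at 10am']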
andirk
I was literally compiling a list of "agent" synonyms at lunch today. My favorite and most accurate so far is "doer".
throwaway81523
The book is from 2023; the link should be edited to reflect that.
bbor
A) Looks really good, will be checking it out in depth as I get time! Thanks for sharing.
B) The endorsements are interesting before you even get to the book; I know all textbooks are marketed, but this seems like quite the concerted effort. For example, take Judea Pearl's quote (an under-appreciated giant):
> This revised and extended edition of Artificial Intelligence: Foundations of Computational Agents should become the standard text of AI education.
Talk about throwing down the gauntlet - especially since Russell looks up to him as a personal inspiration! (Quick context for those rusty on academic AI: Russell & Norvig's 1995 AI: A Modern Approach ("AIAMA"), now in its 4th edition as of 2020, is the de facto book for AI survey courses, supposedly used in 1500 universities across 9 languages as of 2023.[1])
I might be reading drama into the situation that isn't there, but it sure looks like they're trying to establish a connectionist/"scruffy", ML-based, Python-first replacement for AIAMA's symbolic/"neat", logic-based, Lisp-first approach. The 1st edition hit desks in 2010, and the endorsements are overwhelmingly from scruffy scientists & engineers. Obviously, this mirrors the industry's overall trend[2]... at this point, most laypeople think AI is ML. Nice to see a more nuanced -- yet still scruffy-forward -- approach gaining momentum; even Gary Marcus, a noted Neat, is on board!
C) ...OK, after writing an already-long comment (sorry), I did a quantitative comparison of the two books, which I figured y'all might find interesting! I'll link a screenshot[3] and the Google Sheet itself[4] below, but here are some highlights between "AMA" (the reigning champion) and "FCA" (the scrappy challenger):
1. My thesis was definitely correct; by my subjective estimation, AMA is ~6:3 neat:scruffy (57%:32%), vs. a ~3:5 ratio for FCA (34%:50%).
2. My second thesis is also seemingly correct: FCA dedicates the last few pages of every section to "Social Impact", aka ethics. Both books discuss the topic in more depth in the conclusion, representing ~4% of each.
3. FCA seems to have some significant pedagogical advantages, namely its shorter length (797 pages vs. AMA's 1023) and the inclusion of in-text exercises at the end of every section.
4. Both publish source code in multiple languages, but AMA had to be ported to Python from Lisp, whereas FCA is natively in Python (which, obviously, dominates AI atm). The FCA authors actually wrote their own "pseudo-code" Python library, which is both concerning and potentially helpful.
5. Finally, FCA includes sections explicitly focused on data structures, rather than just weaving them into discussions of algorithms & behavioral patterns. I for one think this is a really great idea, and where I see most short-term advances in unified (symbolic + stochastic) AI research coming from! Lots of gold to be mined in 75 years of thought.
Apologies, as always, for the long comment -- my only solace is that you can quickly minimize it. I should start a blog where I can muse to my heart's content...
TL;DR: This new book is shorter, more ML-centric, and arguably uses more modern pedagogical techniques; in general, it seems to be a slightly more engineer-focused answer to Russell & Norvig's more academically oriented standard text.
[1] AIAMA: https://en.wikipedia.org/wiki/Artificial_Intelligence:_A_Mod...
[2] NGRAM: https://books.google.com/ngrams/graph?content=%28Machine+Lea...
[3] Screenshot: https://imgur.com/a/x8QMbno
[4] Google Sheet: https://docs.google.com/spreadsheets/d/1Gw9lxWhhTxjjTstyAKli...
Because I collect definitions of "agent", here's the one this book uses:
> An agent is something that acts in an environment; it does something. Agents include worms, dogs, thermostats, airplanes, robots, humans, companies, and countries.
https://artint.info/3e/html/ArtInt3e.Ch1.S1.html
I think of this as the "academic" definition, or sometimes the "thermostat" definition (though maybe I should call it the "worms and dogs" definition).
Another common variant of it is from Peter Norvig and Stuart Russell's classic AI textbook "Artificial Intelligence: A Modern Approach": http://aima.cs.berkeley.edu/4th-ed/pdfs/newchap02.pdf
> Anything that can be viewed as perceiving its environment through sensors and acting upon that environment through actuators
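To make the sensors/actuators definition concrete, here is a minimal thermostat agent in that sense, matching the "thermostat definition" above. Room and Thermostat are hypothetical illustrations, not code from either book:

    class Room:
        """The environment: a temperature and a heater the agent can switch."""
        def __init__(self, temperature):
            self.temperature = temperature
            self.heater_on = False

    class Thermostat:
        """The agent: perceives via a sensor, acts via an actuator."""
        def __init__(self, setpoint):
            self.setpoint = setpoint
        def perceive(self, room):                # sensor: read the environment
            return room.temperature
        def act(self, room):                     # actuator: change the environment
            room.heater_on = self.perceive(room) < self.setpoint

    room = Room(temperature=17.0)
    Thermostat(setpoint=20.0).act(room)
    print(room.heater_on)                        # True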