Tuesday, October 28, 2025

If we already have automation, what's the need for Agents?

“Automation” and “agent” sound similar — but they solve very different classes of problems.

Automation = Fixed Instruction → Fixed Outcome

  • Like Zapier, IFTTT, Jenkins pipelines, cron jobs.

  • You pre-define exact triggers, actions, rules.

  • Great when:

    • Context is stable.

    • No judgment / interpretation is needed.

    • The world doesn’t change mid-execution.

Example:

“Every day at 5pm, send me a sales report.”
✅ Perfect automation — zero thinking needed.
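
As a concrete sketch, here is what that fixed trigger → fixed action looks like in plain Python (the send_sales_report body is a hypothetical placeholder; in practice this would be a single cron line, e.g. 0 17 * * *):

    import time
    from datetime import datetime

    def send_sales_report():
        # Hypothetical placeholder: a real version would render and email the report.
        print(f"Sales report sent at {datetime.now():%Y-%m-%d %H:%M}")

    # Fixed trigger -> fixed action: check the clock and fire at 17:00 every day.
    while True:
        now = datetime.now()
        if now.hour == 17 and now.minute == 0:
            send_sales_report()
            time.sleep(60)   # skip past the rest of this minute to avoid double-firing
        time.sleep(30)

Every behaviour is spelled out in advance; the program never decides anything.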

Agent = Goal → Autonomous Decision-Making

  • Given a goal, not just rules.

  • Perceives, plans, adapts, self-corrects, retries, negotiates ambiguity.

  • Can operate even when instructions are incomplete or circumstances change.

  • Doesn’t need babysitting.

Example:

“Grow my revenue 15% next quarter — find the best channels, experiment, and adjust.”

✅ That’s NOT automatable with fixed rules. It needs strategy, improvisation, learning, and resource orchestration.
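
A toy sketch of the loop that makes this possible: observe, plan, act, evaluate, adapt. Everything here (the channels, the revenue numbers, the payoff model) is invented purely for illustration:

    import random

    def run_agent(goal_revenue: float, max_steps: int = 20) -> None:
        channels = {"email": 0.0, "ads": 0.0, "seo": 0.0}  # learned payoff per channel
        revenue = 100.0
        for step in range(max_steps):
            # Plan: explore channels early, then exploit whichever has worked best.
            channel = random.choice(list(channels)) if step < 3 else max(channels, key=channels.get)
            # Act: run an "experiment" whose outcome is unknown and noisy.
            lift = random.uniform(-1.0, 5.0)
            revenue += lift
            # Evaluate + adapt: remember which channel paid off.
            channels[channel] += lift
            if revenue >= goal_revenue:
                print(f"Goal reached at step {step}: revenue={revenue:.1f}")
                return
        print(f"Out of steps: revenue={revenue:.1f}, best channel={max(channels, key=channels.get)}")

    run_agent(goal_revenue=115.0)  # "grow revenue 15%" from a base of 100

Notice there is no trigger→action table anywhere: the behaviour emerges from the goal plus feedback.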

Understanding token size
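
A token is the unit an LLM actually reads and writes: a short chunk of text, often a word piece. As a rough rule of thumb, one token is about 4 English characters, or about three-quarters of a word, and context windows, pricing, and output limits are all counted in tokens, not characters or words. A quick way to see this, assuming the tiktoken package (OpenAI's open-source tokenizer) is installed:

    import tiktoken  # pip install tiktoken

    enc = tiktoken.get_encoding("cl100k_base")  # encoding used by many recent OpenAI models
    text = "Understanding token size"
    tokens = enc.encode(text)
    print(len(tokens), tokens)                # token count and the integer token IDs
    print([enc.decode([t]) for t in tokens])  # the text chunk behind each token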

 


LLM: where are the parameters stored, and what does it look like on the file system?

 

“What is an LLM? Is it a set of files? Does it sit as an .exe? A folder? A single binary? What does it LOOK LIKE if I download it?”

Answer: YES — an LLM is literally a set of files.
A big model weights file — .bin, .pth, .safetensors, etc. — usually 2GB to 400GB+, plus small config and tokenizer files alongside it.

Parameters live inside the model file — not in a vector DB.

A vector DB only stores embeddings of user/business knowledge for retrieval.
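
To see this concretely, here is a sketch using the huggingface_hub package (the model name is just an example; any open model on the Hub works):

    import os
    from huggingface_hub import snapshot_download  # pip install huggingface_hub

    # Download a small open model and list what actually lands on disk.
    path = snapshot_download("TinyLlama/TinyLlama-1.1B-Chat-v1.0")
    for name in sorted(os.listdir(path)):
        size_mb = os.path.getsize(os.path.join(path, name)) / 1e6
        print(f"{size_mb:10.1f} MB  {name}")

    # Expect one or more large .safetensors weight files (the parameters)
    # next to small config.json and tokenizer files.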

Monday, October 13, 2025

Amazon Bedrock Guardrails

Amazon Bedrock Guardrails provides safeguards that you can configure for your generative AI applications based on your use cases and responsible AI policies. You can create multiple guardrails tailored to different use cases and apply them across multiple foundation models (FMs), providing a consistent user experience and standardizing safety and privacy controls across generative AI applications. You can use guardrails for both model prompts and responses with natural language.

You can use Amazon Bedrock Guardrails in multiple ways to help safeguard your generative AI applications. For example:

  • A chatbot application can use guardrails to help filter harmful user inputs and toxic model responses.

  • A banking application can use guardrails to help block user queries or model responses associated with seeking or providing investment advice.

  • A call center application that summarizes conversation transcripts between users and agents can use guardrails to redact users’ personally identifiable information (PII) and protect user privacy.

Amazon Bedrock Guardrails provides the following safeguards (also known as policies) to detect and filter harmful content; a short code sketch follows the list:

  • Content filters – Detect and filter harmful text or image content in input prompts or model responses. Filtering is based on detection of predefined harmful content categories: Hate, Insults, Sexual, Violence, Misconduct, and Prompt Attack. You can also adjust the filter strength for each of these categories.

  • Denied topics – Define a set of topics that are undesirable in the context of your application. The filter will help block them if detected in user queries or model responses.

  • Word filters – Configure filters to help block undesirable words, phrases, and profanity (exact match). Such words can include offensive terms, competitor names, etc.

  • Sensitive information filters – Configure filters to help block or mask sensitive information, such as personally identifiable information (PII), in user inputs and model responses. Blocking or masking is based on probabilistic detection of sensitive information in standard formats, in entities such as Social Security numbers (SSNs), dates of birth, addresses, etc. You can also configure regular-expression-based detection of patterns for custom identifiers.

  • Contextual grounding checks – Help detect and filter hallucinations in model responses based on grounding in a source and relevance to the user query.

  • Automated Reasoning checks – Can help you validate the accuracy of foundation model responses against a set of logical rules. You can use Automated Reasoning checks to detect hallucinations, suggest corrections, and highlight unstated assumptions in model responses.
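
A minimal boto3 sketch of creating a guardrail with two of these policies and applying it at inference time. Region, names, messages, and the model ID are placeholder assumptions:

    import boto3

    bedrock = boto3.client("bedrock", region_name="us-east-1")

    # Create a guardrail with a content filter and one denied topic.
    resp = bedrock.create_guardrail(
        name="demo-guardrail",
        blockedInputMessaging="Sorry, I can't help with that request.",
        blockedOutputsMessaging="Sorry, I can't provide that response.",
        contentPolicyConfig={"filtersConfig": [
            {"type": "HATE", "inputStrength": "HIGH", "outputStrength": "HIGH"},
            # Prompt-attack filtering applies to inputs only, so outputStrength is NONE.
            {"type": "PROMPT_ATTACK", "inputStrength": "HIGH", "outputStrength": "NONE"},
        ]},
        topicPolicyConfig={"topicsConfig": [{
            "name": "InvestmentAdvice",
            "definition": "Recommendations about investing money or picking securities.",
            "type": "DENY",
        }]},
    )

    # Apply the guardrail to both the prompt and the response at inference time.
    runtime = boto3.client("bedrock-runtime", region_name="us-east-1")
    answer = runtime.converse(
        modelId="anthropic.claude-3-haiku-20240307-v1:0",  # example model ID
        messages=[{"role": "user", "content": [{"text": "Which stock should I buy?"}]}],
        guardrailConfig={
            "guardrailIdentifier": resp["guardrailId"],
            "guardrailVersion": "DRAFT",
        },
    )
    print(answer["output"]["message"]["content"][0]["text"])

With the denied topic configured, the investment question should come back as the blocked-input message rather than advice.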
