Wednesday, September 17, 2025

Linear equations in AI / machine learning

1. The Linear Equation in AI

In machine learning, the model often starts with a linear equation:

y = w_1x_1 + w_2x_2 + \dots + w_nx_n + b

Inputs = features (e.g., number of rooms in a house, area in sq. ft, etc.)

Weights = importance given to each feature

Bias = baseline adjustment

Output = prediction (e.g., house price)
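The prediction step above can be sketched in a few lines of plain Python. The feature values, weights, and bias here are made-up numbers for illustration:

```python
# Minimal sketch of a linear model's prediction:
# y = w1*x1 + w2*x2 + ... + b
def predict(features, weights, bias):
    # Weighted sum of inputs plus the bias term
    return sum(w * x for w, x in zip(weights, features)) + bias

# Example: a house with 1000 sq. ft and 2 bedrooms,
# with hypothetical weights 150 and 20,000 and bias 0
print(predict([1000, 2], [150, 20000], 0))  # 150*1000 + 20000*2 + 0 = 190000
```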

---

2. How Weights Are Learned

Initially, weights are set randomly (like guessing).

The model makes a prediction.

It compares prediction vs. actual answer (this difference = error/loss).

Using an algorithm like gradient descent, the model adjusts weights step by step to reduce error.
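The loop above can be sketched as plain-Python gradient descent on a one-feature line, y = w·x + b, with made-up training data. The learning rate and iteration count are arbitrary illustrative choices:

```python
# Gradient descent sketch: fit y = w*x + b to tiny made-up data.
data = [(1.0, 3.0), (2.0, 5.0)]  # (x, y) pairs; the true line is y = 2x + 1
w, b = 0.0, 0.0                   # initial guess (like guessing randomly)
lr = 0.1                          # learning rate (step size)

for _ in range(1000):
    # Gradients of the mean squared error with respect to w and b
    grad_w = sum(2 * (w * x + b - y) * x for x, y in data) / len(data)
    grad_b = sum(2 * (w * x + b - y) for x, y in data) / len(data)
    # Step opposite the gradient to reduce the error
    w -= lr * grad_w
    b -= lr * grad_b

print(round(w, 2), round(b, 2))   # approaches w = 2, b = 1
```

Each pass nudges the weights a little in the direction that shrinks the error, which is exactly the "adjusts weights step by step" described above.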

---

3. Simple Example: Predicting House Price

Equation:

Price = (w_1 \times \text{Area}) + (w_2 \times \text{Bedrooms}) + b

Suppose training data says:

A 1000 sq. ft, 2-bedroom house = $300k

A 2000 sq. ft, 3-bedroom house = $500k


The model might learn weights like:

w_1 = 150 (each sq. ft adds $150)

w_2 = 20{,}000 (each bedroom adds $20,000)

b = 0 (no baseline adjustment)

So:

Price = 150 \times \text{Area} + 20{,}000 \times \text{Bedrooms}
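Plugging the learned weights into code, a quick sanity check on a sample house (the weights are the illustrative ones above, not values from real data):

```python
# Learned equation: Price = 150 * Area + 20,000 * Bedrooms (bias = 0)
def house_price(area_sqft, bedrooms):
    return 150 * area_sqft + 20_000 * bedrooms

# A 1500 sq. ft, 3-bedroom house
print(house_price(1500, 3))  # 150*1500 + 20000*3 = 285000
```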
---

4. Intuition

If w_1 is large → Area matters a lot.

If w_2 is small → Bedrooms don’t influence much.

AI keeps tweaking weights until the predictions match reality closely.

---

👉 In short:

Weights = knobs AI turns to “tune” importance of inputs.

Training = the process of finding the best knob settings.

---
