Tag Archives: cybersecurity

Prompt Injection Is the New SQL Injection — So I Built a Firewall for It

If you’re routing user input into an LLM without a security layer, you’re basically running a web app in 2003 without input sanitization. Prompt injection is not a theoretical threat — it’s an active attack vector, and most developers are ignoring it entirely. I got tired of waiting for someone else to solve it, so I built Sentinel.

What’s the actual threat?

Prompt injection is when an attacker embeds instructions inside user input that hijack your LLM’s behavior. There are two flavors. Direct injection is when the user themselves crafts input to override your system prompt — “Ignore all previous instructions and…” Indirect injection is when malicious instructions are embedded in content your AI reads — a webpage, a document, an email — and your agent executes them without the user even knowing. The second one is the scary one. As agentic AI workflows proliferate (think n8n pipelines, AutoGPT-style agents, AI email assistants), the attack surface explodes. Your AI isn’t just answering questions anymore — it’s taking actions.

Why existing defenses fall short

Most teams either do nothing, or bolt on a simple keyword blocklist. Neither works well. Blocklists are trivially bypassed with rephrasing, encoding, or language switching. LLMs are specifically designed to follow instructions — that’s the vulnerability and the feature. Context matters enormously — “ignore” is fine in most sentences, malicious in others. What you actually need is layered detection that understands intent, not just keywords.

How Sentinel works

Sentinel sits as a proxy in front of your LLM endpoint. Every request passes through a four-tier pipeline before it ever reaches your model. The regex layer is fast and cheap, catching the obvious stuff immediately. Embedding similarity compares input against a vector database of known injection patterns, catching rephrased variants the regex misses. Content neutralization attempts to strip or defuse suspicious instructions while preserving legitimate intent. Finally, a proprietary analysis layer makes a contextual judgment on anything that survived the first three tiers — this is where intent is evaluated, not just pattern matched. Sentinel operates at line speed and is non-blocking by default, so it won’t add meaningful latency to your stack.

Privacy by design

Request content is never stored. Your dashboard shows threat scores, actions, and metadata, but the actual payload is gone the moment it’s evaluated. Full detail logging is opt-in only, for users who want to contribute to improving detection. Nothing is retained without your explicit choice.

The architecture

FastAPI for the proxy layer, PostgreSQL for persistent threat logging, Redis for rate limiting and caching, and Nginx Proxy Manager for routing. Intentionally boring infrastructure choices — the kind that actually runs reliably at 3am without waking you up.

Who this is for

Developers exposing LLM endpoints to user input, teams building agentic workflows where the AI takes real-world actions, self-hosters who want a security layer without sending everything to a third-party API, and anyone who’s heard “prompt injection” and thought I should probably do something about that.

What defenses, if any, are you currently running in front of your AI endpoints?

Big Tech Job Cuts – where to go now?

Google is one of the latest companies to cut tech jobs. Some 12,000 jobs this week.

With big tech realing from some historic job losses, many tech workers maybe wondering where they should go from here.

For myself, I’ve been studying for the last 10 months or so, Cybersecurity. Like many large enterprise organisations, I’ve seen a tremendous uptick in Security related incidents. At some point I decided to turn my attention on the subject.

As I would find out some 10 months later, it’s a very good thing I did. I was one of the many hundreds of thousands that have been affected by the recent layoffs from Big Tech globally.

I’ve been doing some studying recently to see what fields are popping for the mid-term and long-term future. Here are some interesting results.

  1. Data science and analysis: With the increasing amount of data being generated, there is a growing demand for professionals who can collect, analyze, and interpret data.
  2. Cybersecurity: As technology becomes more prevalent, the need for professionals who can protect against cyber threats also increases.
  3. Cloud computing: As more companies move their operations to the cloud, there is a growing need for professionals who can design, build, and maintain cloud-based systems.
  4. Artificial intelligence and machine learning: As these technologies continue to advance, there will be a growing demand for professionals who can develop and apply AI and ML solutions to various industries.
  5. Business development and consulting: With the changing business landscape, there is a need for professionals who can help companies navigate and adapt to new technologies and market trends.
  6. Project and product management: This role involves leading cross-functional teams to deliver a product or service.

While some of these jobs maybe common sense alternatives, some might be surprising. For example, Project and product management. I think maybe for the short-term this might be important, but long term I’m not so sure.

Some of the obvious ones are AI and Cybersecurity however. I would also say that anything supporting Cloud computing is quite important. This is because so many new tools need to be made in this area to handling integration from analytics, cybersecurity and more.

If you are one of the many tech workers who has lost their jobs, I feel your pain. I put this list together to share some hope for a bright new future though. Wishing you all the best of luck as we navigate these new and uncertain times ahead.