AI Security Principles in NEXT AI

NEXT AI is designed to remain secure even when LLMs behave in unexpected ways.

Written by Ronny
Updated today

Large language models (LLMs) are powerful, but they introduce new security risks such as prompt injection, prompt leakage (also called prompt extraction), and jailbreak attempts. NEXT AI is built to remain secure even when models behave in unexpected ways.

TL;DR

  • LLMs are not a security boundary. We assume prompts can be influenced or partially visible.

  • Security is enforced outside the model. Access control and data isolation happen before any model call.

  • Prompts contain no secrets, credentials, permission logic, or hidden tooling.

  • Only authorized data is ever sent to the model.

  • Integrations are explicit and user-controlled.

  • Optional PII reduction happens before storage and before AI processing.

  • Prompt-like output can look confusing, but it does not expand access or expose protected data.

1. We assume LLMs are not a security boundary

Today, there is no complete, guaranteed way to prevent prompt injection or prompt extraction at the model level. That means anything placed in prompts should be treated as potentially observable, manipulable, or extractable.

Because of this, NEXT AI does not rely on prompts being secret to stay secure. The system is designed so that prompt visibility does not create risk.

2. Security is enforced before the model sees any data

All security-critical decisions are made outside the LLM. Before any model call, NEXT AI enforces:

  • Authentication and permissions

  • Teamspace and workspace isolation

  • Scoping and filters (for example, limiting a chat to a subset of data)

Result: the model only receives data the user is explicitly authorized to access. Data outside that scope is never sent to the model and therefore cannot be accessed through prompting.
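
For illustration, here is a minimal sketch of that ordering. Every name in it (authorize, loadScopedDocuments, callModel, and the types) is a hypothetical stand-in rather than NEXT AI's actual internal API; what it shows is simply that authentication, isolation, and scoping complete before any model call happens.

```typescript
// Hypothetical types and services, stubbed for illustration only.
interface User { id: string; workspaceId: string; }
interface ChatRequest { user: User; question: string; collectionIds?: string[]; }

async function authorize(user: User): Promise<void> {
  // Stand-in for real authentication and permission checks.
  if (!user.id) throw new Error("unauthenticated");
}

async function loadScopedDocuments(query: {
  workspaceId: string;
  userId: string;
  collectionIds?: string[];
}): Promise<string[]> {
  // Stand-in for a data layer that filters by workspace, permissions, and scope.
  return [];
}

async function callModel(input: { question: string; context: string[] }): Promise<string> {
  // Stand-in for the LLM call.
  return "";
}

// The ordering is the point: authorization and scoping first, model call last.
async function answerChat(req: ChatRequest): Promise<string> {
  await authorize(req.user);                              // authentication and permissions
  const context = await loadScopedDocuments({
    workspaceId: req.user.workspaceId,                    // workspace and teamspace isolation
    userId: req.user.id,
    collectionIds: req.collectionIds,                     // optional chat-level scoping and filters
  });
  return callModel({ question: req.question, context }); // only authorized data reaches the model
}
```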

3. System prompts are minimal and non-sensitive

System prompts in NEXT AI are intentionally small and focused. They do not contain:

  • Credentials or API keys

  • Permission rules or access control logic

  • Internal control-plane instructions

  • Hidden tool definitions that would enable misuse

Even in a worst-case scenario where prompt-like text becomes visible, it does not grant additional permissions or access.

4. Prompt-like output and trust

Sometimes users may see model output that looks like system instructions, especially if they explicitly ask NEXT AI Chat to "show" or "use" a system prompt.

This can feel surprising, but it is a known LLM behavior. Importantly:

  • It does not indicate loss of control

  • It does not expose protected data

  • It does not expand permissions or capabilities

Even when there is no security impact, we treat this seriously because it can create confusion and reduce trust. Our goal is for NEXT AI Chat to refuse requests that try to elicit internal instructions rather than generate prompt-like text.

5. Data sharing only happens when you configure it

By default, your data stays inside your NEXT AI workspace and is used only within the product.

If you connect NEXT AI to external systems (exports, automations, integrations, MCP clients, or tools like GitHub, Jira, or email), those connections are explicitly configured by you. NEXT AI does not ship with third-party sharing enabled by default.

When you choose to send data to an external system, you control where that data goes and are responsible for ensuring the destination meets your security requirements.

6. Personal data (PII) is reduced before AI processing

NEXT AI does not rely on an LLM to handle sensitive personal data safely.

When PII reduction is enabled:

  • It happens at ingestion time

  • Before the data is stored

  • Before any AI model processes it

That means the model never sees the original personal data and cannot reveal it regardless of prompting.
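
As a rough sketch of that ordering (not NEXT AI's actual code), the functions below are hypothetical stand-ins: a redaction step runs during ingestion, and only the redacted text is ever stored or later passed to a model.

```typescript
// Hypothetical ingestion pipeline; redactPII and store are illustrative stand-ins.
function redactPII(text: string): string {
  // Example reduction: replace email addresses with a placeholder.
  return text.replace(/[\w.+-]+@[\w-]+\.[\w.]+/g, "[email]");
}

async function store(text: string): Promise<void> {
  // Stand-in for persisting to workspace storage.
}

async function ingestDocument(raw: string): Promise<void> {
  const redacted = redactPII(raw); // PII reduction happens at ingestion time
  await store(redacted);           // only the redacted text is stored
  // Any later AI processing reads the stored, redacted text,
  // so the model never receives the original personal data.
}
```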

7. Identity and permissions travel with every request

When you use NEXT AI, including NEXT AI Chat, your identity and permissions are enforced end-to-end. Any internal service that loads data does so within the scope of your access rights.

This also applies to more advanced agent-like workflows involving multiple internal components. No step can access data outside your permissions.
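
A minimal sketch of what that means in practice, using hypothetical names rather than NEXT AI's real components: every internal step receives the calling user's identity and loads data only within that identity's rights.

```typescript
// Hypothetical identity object and workflow step, for illustration only.
interface Identity {
  userId: string;
  workspaceId: string;
  permissions: string[];
}

async function fetchData(identity: Identity, query: string): Promise<string[]> {
  // Stand-in for an internal service: it checks the caller's rights itself
  // and returns only data inside the caller's workspace and permissions.
  if (!identity.permissions.includes("read")) throw new Error("forbidden");
  return [];
}

// Every step in an agent-like workflow receives the same identity,
// so no step can load data outside the user's permissions.
async function runWorkflowStep(identity: Identity, input: string): Promise<string> {
  const data = await fetchData(identity, input);
  return data.join("\n");
}
```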

8. Transparency supports trust

NEXT AI is designed to be understandable and reviewable. In many workflows, the product can show what happened, such as which steps ran and what data was used, helping teams audit and verify behavior.

Transparency is intentional and designed. It is not the same as exposing raw internal prompts or control logic.

What to do if you see prompt-like text

If you see output that looks like system-level instructions and you are unsure:

  1. Save the message or take a screenshot

  2. Contact support and we will review it with you

Summary

NEXT AI uses a conservative security approach. We assume models can be manipulated and prompts may leak. That is why security is enforced by architecture, not by prompt secrecy. Prompt-like output does not change what NEXT AI can access or do, and your data remains protected by permissions and isolation enforced outside the model.
