Tensorix

Overview

Tensorix is an EU-sovereign AI inference platform providing private access to open-source language models. Founded in 2024 and registered in Ireland (Tensorix Ltd.), the company positions itself as a privacy-first, GDPR-compliant alternative to US-based AI providers. All infrastructure runs exclusively in Dublin and Helsinki data centres, avoiding US Cloud Act jurisdiction. The company emphasises zero data retention, transparent compliance, and cost savings of 60–88% versus OpenAI.

Markets

  • Enterprise AI inference — companies needing production LLM access with strong privacy guarantees
  • EU-regulated industries — finance, healthcare, government requiring GDPR compliance and data residency
  • Cost-sensitive teams — developers looking to reduce inference costs by switching from proprietary to open-source models
  • Geographic focus: EU-first, particularly organisations subject to European data sovereignty requirements

Products

  • Shared inference API — pay-as-you-go access to 14 confirmed LLMs + 2 audio models via OpenAI-compatible and Anthropic-compatible endpoints
  • Audio API — TTS (chatterbox-turbo) and STT (faster-whisper-large-v3) via OpenAI-compatible endpoints
  • Dedicated inference — isolated compute for enterprise customers (details limited on public site)
  • Key differentiators: zero data retention, EU-only infrastructure, OpenAI SDK drop-in compatibility (change base_url only), publicly available DPA and sub-processor list
  • Integrations ecosystem: 30+ documented integrations including Claude Code, Cursor, Cline, LangChain, LlamaIndex, Vercel AI SDK, n8n, Zapier, Dify, and more
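The "change base_url only" claim means an existing OpenAI-style client keeps the same wire format and only points at a different host. A minimal sketch of that request shape, using only the standard library; the `https://api.tensorix.ai/v1` path, model ID, and key are illustrative assumptions, not confirmed endpoint details:

```python
import json
import urllib.request

# Assumed OpenAI-compatible base URL; confirm the exact path in the docs.
BASE_URL = "https://api.tensorix.ai/v1"

def build_chat_request(model: str, prompt: str, api_key: str) -> urllib.request.Request:
    """Build (but do not send) a standard chat-completions request.

    Because the wire format is OpenAI-compatible, the body and headers are
    identical to an OpenAI request; only the host differs.
    """
    body = json.dumps({
        "model": model,
        "messages": [{"role": "user", "content": prompt}],
    }).encode()
    return urllib.request.Request(
        f"{BASE_URL}/chat/completions",
        data=body,
        headers={
            "Authorization": f"Bearer {api_key}",
            "Content-Type": "application/json",
        },
        method="POST",
    )

req = build_chat_request("z-ai/glm-5.1", "Hello", "sk-example")
```

With the official OpenAI SDK the same swap is just the `base_url` constructor argument; everything downstream of the client is unchanged.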

Supported Models

| Provider | Model | Model ID | Context | Notes |
|---|---|---|---|---|
| Z-AI | GLM-5.1 | z-ai/glm-5.1 | 203K | Featured model, recommended for Claude Code |
| Z-AI | GLM-4.6 | z-ai/glm-4.6 | 203K | General purpose, bilingual |
| MiniMax | MiniMax-M2.5 | minimax/minimax-m2.5 | 197K | Reasoning, functions |
| MiniMax | MiniMax-M2 | minimax/minimax-m2 | 197K | Coding, fast |
| Moonshot AI | Kimi-K2.5 | moonshotai/kimi-k2.5 | 262K | Vision, functions |
| DeepSeek | DeepSeek-V3.1 | deepseek/deepseek-chat-v3.1 | 164K | General chat/coding |
| DeepSeek | DeepSeek-V3.2 | deepseek/deepseek-v3.2 | 164K | Fast responses |
| DeepSeek | DeepSeek-R1-0528 | deepseek/deepseek-r1-0528 | 164K | Complex reasoning |
| Alibaba | Qwen3-235B | qwen/qwen3-235b-a22b-2507 | 131K | Large MoE model |
| Alibaba | Qwen3-Coder-30B | qwen/qwen3-coder-30b-a3b-instruct | 262K | Coding specialist |
| Meta | Llama-3.3-70B | meta-llama/llama-3.3-70b-instruct | 131K | Instruction-tuned |
| Meta | Llama-4-Maverick | meta-llama/llama-4-maverick | 1050K | Largest context (1M+) |
| OpenAI | GPT-OSS-120B | openai/gpt-oss-120b | 131K | Open-source, reasoning |
| OpenAI | GPT-OSS-20B | openai/gpt-oss-20b | 131K | Open-source, lightweight |
| — | chatterbox-turbo | chatterbox-turbo | — | TTS (text-to-speech) |
| Systran | faster-whisper-large-v3 | Systran/faster-whisper-large-v3 | — | STT (speech-to-text) |

Last verified: 2026-04-12
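The catalogue above spans a wide range of context windows (131K to 1M+ tokens), so model choice often starts from the prompt size. A small illustrative helper over a subset of the table's model IDs (the selection logic and subset are this document's own, not a Tensorix API):

```python
# Context windows copied from the catalogue above (tokens, approximate).
CATALOGUE = {
    "z-ai/glm-5.1": 203_000,
    "moonshotai/kimi-k2.5": 262_000,
    "qwen/qwen3-coder-30b-a3b-instruct": 262_000,
    "meta-llama/llama-3.3-70b-instruct": 131_000,
    "meta-llama/llama-4-maverick": 1_050_000,
}

def models_with_context(min_tokens: int) -> list[str]:
    """Return model IDs whose context window is at least min_tokens."""
    return sorted(m for m, ctx in CATALOGUE.items() if ctx >= min_tokens)

# Models that can hold a 250K-token prompt
big = models_with_context(250_000)
```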

Key Capabilities

| Capability | Status / Notes |
|---|---|
| OpenAI SDK compatibility | Drop-in base_url swap |
| Anthropic SDK compatibility | Separate endpoint: api.tensorix.ai |
| Streaming | Standard SSE |
| Zero data retention | Core differentiator |
| EU data residency | Dublin + Helsinki only |
| GDPR compliance | Irish-registered, public DPA |
| Dedicated inference | Enterprise offering |
| Function calling | Documented in API reference |
| Audio API (TTS/STT) | chatterbox-turbo (TTS), faster-whisper-large-v3 (STT) |
| Caching | Not documented |
| Fallback/routing | "Intelligent routing and automatic failover" claimed |
| Rate limits | 60 RPM, 2M TPM per key; enterprise custom limits |
| Usage dashboard | app.tensorix.ai/dashboard/usage |
| Team accounts | Documented in docs |
| Observability/logging | Not documented beyond usage dashboard |

Last verified: 2026-04-04

Pricing

| Tier | Price | Notes |
|---|---|---|
| Pay-as-you-go | From $0.15/M tokens | No monthly fees, no minimums |
| TTS (chatterbox-turbo) | $0.000015/character | ~$0.15 per 10K chars |
| Dedicated inference | Custom | Enterprise, contact sales |

Last verified: 2026-04-04
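The published rates are easy to sanity-check: the table's "~$0.15 per 10K chars" note follows directly from the per-character TTS price, and the entry-level token rate works out the same way. Illustrative arithmetic only, using the floor prices above:

```python
# Floor rates from the pricing table.
TOKEN_RATE = 0.15 / 1_000_000   # $0.15 per million tokens (entry-level models)
TTS_RATE = 0.000015             # $ per character, chatterbox-turbo

def inference_cost(tokens: int) -> float:
    """Cost in USD for a given token count at the floor rate."""
    return tokens * TOKEN_RATE

def tts_cost(chars: int) -> float:
    """Cost in USD for synthesising a given character count."""
    return chars * TTS_RATE

# 10,000 characters of TTS costs $0.15, matching the table's note;
# so does one million tokens at the cheapest per-token rate.
```

Note that $0.15/M is a floor ("From"); larger models in the catalogue will price higher.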

URLs to Monitor

| URL | Label | Notes |
|---|---|---|
| https://tensorix.ai/sitemap.xml | Sitemap | Full site structure |
| https://tensorix.ai/pricing | Pricing | Pricing page |
| https://tensorix.ai/models | Models | Model catalogue |
| https://docs.tensorix.ai/ | Docs | Documentation hub |
| https://tensorix.ai/trust | Trust | Compliance and security |
| https://tensorix.ai/dedicated-inference | Dedicated Inference | Enterprise offering |
| https://tensorix.ai/blog | Blog | Announcements |
| https://tensorix.ai/alternatives/openai | vs OpenAI | Competitive positioning |

Strategy

  • Privacy-as-moat: Tensorix leads with zero data retention and EU sovereignty as primary differentiators, targeting the growing segment of EU enterprises re-evaluating US cloud dependencies
  • Open-source model focus: Exclusively serves open-source models (now including OpenAI's OSS releases), positioning against vendor lock-in from proprietary providers
  • Drop-in migration: Emphasises one-line SDK migration from OpenAI, reducing switching costs
  • Compliance-forward: Publicly available DPA, sub-processor list, and SLA — unusual transparency for the space
  • NVIDIA Inception member: Part of NVIDIA's startup programme, suggesting GPU/infrastructure partnership
  • Content marketing: Active blog comparing against OpenAI and Anthropic, publishing EU AI Act guides, positioning as thought leader on sovereign AI

Formidability

Score: 4/10

Tensorix occupies a real niche (EU-sovereign inference) and has significantly expanded its model catalogue from ~8 to 14 confirmed LLMs including Llama 4 Maverick (1M+ context), OpenAI OSS models, and Qwen 3. No visible observability features and limited documentation remain weaknesses. The EU sovereignty angle is compelling for a specific segment but restricts TAM. They compete more directly with OpenAI's API than with OpenRouter — they route to open-source models only, while OpenRouter offers both proprietary and open-source across many more providers. Low overlap with OpenRouter's core value prop (unified access to all models/providers), but worth monitoring as EU data sovereignty demand grows and their model catalogue expands.