Oraclous Architecture

Three layers. One platform.

Oraclous is built in three composable layers. Layer 1 stores your domain knowledge. Layer 2 gives agents the tools to act on it. Layer 3 runs the autonomous fine-tuning loop on top.

Layer 3: FTOps Agent Team

The autonomous fine-tuning operation.

10-Stage Loop · HITL Gates · Retrain Scheduling · Drift Detection · LoRA / QLoRA / DPO · ORPO · RAGAS Gating
Layer 2: Graph-Native Agent Framework

The infrastructure agents run on.

15+ MCP Tools · Agent Memory · RAGAS Eval · Credential Broker · Visual Flow Studio · Standalone Product
Layer 1: Multi-Tenant Knowledge Graph

The data foundation everything builds on.

Neo4j · Bitemporal · Cross-Graph Federation · ReBAC ACL · Zero-Copy Versioning · SAME_AS Deduplication
FTOps Agent Team (Layer 3)

18 specialist agents that automate the complete fine-tuning lifecycle — from knowledge graph analysis to model deployment and drift monitoring.

Pipeline stages: 10
Specialist agents: 18
Training methods: SFT · LoRA · QLoRA · DPO · ORPO
HITL checkpoints: every stage gate

The 10-Stage Loop

Every fine-tuning cycle follows a deterministic 10-stage pipeline: Connect → Structure → Analyze → Research → Curate → Select → Train → Evaluate → Deploy → Monitor. Each stage is owned by a dedicated agent. The loop re-enters at Stage 3 when the Monitor Agent detects drift above threshold.
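The stage ordering and the drift re-entry rule can be sketched as a small state machine. This is an illustrative model only, not the Oraclous implementation; the `Stage` enum and `next_stage` helper are hypothetical names.

```python
from enum import IntEnum

class Stage(IntEnum):
    # The 10 pipeline stages, in order.
    CONNECT = 1
    STRUCTURE = 2
    ANALYZE = 3
    RESEARCH = 4
    CURATE = 5
    SELECT = 6
    TRAIN = 7
    EVALUATE = 8
    DEPLOY = 9
    MONITOR = 10

def next_stage(current: Stage, drift_detected: bool = False) -> Stage:
    """Advance the loop; drift at MONITOR re-enters at ANALYZE (Stage 3)."""
    if current is Stage.MONITOR:
        return Stage.ANALYZE if drift_detected else Stage.MONITOR
    return Stage(current + 1)
```

Modeling the loop as a deterministic transition function makes the re-entry behavior explicit: the pipeline only loops back when the Monitor stage raises the drift flag.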

Human-in-the-Loop Gates

HITL approval UIs appear at every critical decision point — dataset review, strategy selection, training launch, deployment promotion. Operators can inspect, modify, or override agent decisions before proceeding. No stage advances without explicit approval unless you configure fully autonomous mode.

Training Method Selection

The Strategy Agent (Stage 6) chooses the optimal training technique based on dataset characteristics, target capability, and compute constraints. Supported methods: SFT (supervised fine-tuning), LoRA, QLoRA, full fine-tune, DPO (direct preference optimization), and ORPO. Hyperparameters are generated, not guessed.
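A simplified, rule-of-thumb version of that selection logic might look like the sketch below. The thresholds and the `select_method` function are illustrative assumptions; the real Strategy Agent weighs many more signals than these three.

```python
def select_method(n_examples: int, has_preference_pairs: bool,
                  gpu_memory_gb: float) -> str:
    """Toy heuristic for choosing a training method from dataset and compute."""
    if has_preference_pairs:
        # Preference data: ORPO folds alignment into a single pass,
        # while DPO needs memory for a frozen reference model.
        return "ORPO" if gpu_memory_gb < 40 else "DPO"
    if gpu_memory_gb < 24:
        return "QLoRA"  # 4-bit quantized base model + low-rank adapters
    if n_examples < 50_000:
        return "LoRA"   # adapters suffice for modest datasets
    return "SFT"        # full supervised fine-tune for large datasets
```

Encoding the choice as an explicit function is also what enables the "generated, not guessed" hyperparameter step: once the method is fixed, a separate routine can derive rank, learning rate, and batch size from the same inputs.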

See the 18 Agents → · Book a Strategy Call → · View the Roadmap →