Tutorial: Integrating Audit Logging (Splunk/Datadog)
This tutorial follows the Adding Auth0 Integration (Modular) guide. We will replace the `ConsoleAuditLogStore` with a production-oriented implementation that sends detailed audit records to an external observability platform: Splunk or Datadog.
Prerequisites:
- Completion of the previous Modular Auth0 Integration tutorial.
- Access to either:
  - A Splunk instance (Cloud trial, free tier, or on-prem) with an HTTP Event Collector (HEC) endpoint configured, plus its token and URL.
  - A Datadog account (trial or paid) with an API key and knowledge of your site’s Logs API endpoint.
- Familiarity with making HTTP POST requests in Node.js.
Goal: configure the governed MCP server to send structured `AuditRecord` data to Splunk or Datadog, replacing console logging with a more robust and searchable audit trail.
Step 8: Install HTTP Client
We need a reliable way to send HTTP POST requests from our Node.js server to the Splunk/Datadog API endpoints. `axios` is a popular choice.

In your `my-governed-mcp-app` project directory:

```bash
npm install axios
```
Step 9 (Option A): Implement SplunkAuditLogStore
This class implements the `AuditLogStore` interface, specifically formatting and sending audit data to a Splunk HEC endpoint.

Create `src/auditing/splunk-audit-log-store.ts`

Create a new directory `src/auditing` and place the following code inside `splunk-audit-log-store.ts`.
```typescript
import { AuditLogStore, AuditRecord, Logger } from '@ithena-one/mcp-governance'; // Adjust path if needed
import axios from 'axios';

interface SplunkStoreConfig {
    hecUrl: string;      // e.g., https://your-splunk-instance:8088/services/collector
    hecToken: string;    // Your Splunk HEC token
    source?: string;     // Optional: HEC source field
    sourceType?: string; // Optional: HEC sourcetype field (e.g., _json)
    index?: string;      // Optional: HEC index field
    logger?: Logger;     // Optional logger instance
}

export class SplunkAuditLogStore implements AuditLogStore {
    private readonly config: SplunkStoreConfig;
    private readonly logger: Logger;

    constructor(config: SplunkStoreConfig) {
        if (!config.hecUrl || !config.hecToken) {
            throw new Error('Splunk HEC URL and Token must be provided.');
        }
        this.config = {
            sourceType: '_json', // Default to JSON sourcetype
            ...config            // Merge user config over defaults
        };
        this.logger = config.logger || console;
        this.logger.info?.('SplunkAuditLogStore configured.', {
            url: this.config.hecUrl,
            source: this.config.source,
            sourceType: this.config.sourceType,
            index: this.config.index
        });
    }

    async log(record: AuditRecord): Promise<void> {
        // Splunk HEC format: { "event": <your_event_data>, "source": ..., "sourcetype": ..., "index": ... }
        const payload = {
            event: record, // Send the whole AuditRecord as the event data
            ...(this.config.source && { source: this.config.source }),
            ...(this.config.sourceType && { sourcetype: this.config.sourceType }),
            ...(this.config.index && { index: this.config.index }),
        };
        try {
            await axios.post(this.config.hecUrl, payload, {
                headers: {
                    'Authorization': `Splunk ${this.config.hecToken}`,
                    'Content-Type': 'application/json',
                },
                // Optional: add a timeout, httpsAgent for self-signed certs, etc.
            });
            this.logger.debug?.('Successfully sent audit record to Splunk HEC', { eventId: record.eventId });
        } catch (error: any) {
            // Log failure but don't throw - audit logging shouldn't break the app
            this.logger.error?.('Failed to send audit record to Splunk HEC', {
                eventId: record.eventId,
                error: error.message,
                status: error.response?.status,
                // data: error.response?.data // Be cautious logging response data
            });
        }
    }

    // Optional initialize/shutdown methods if needed (e.g., for batching)
    // async initialize(): Promise<void> { ... }
    // async shutdown(): Promise<void> { ... }
}
```
Replace the placeholders for `hecUrl` and `hecToken` when instantiating. Load these securely (e.g., from environment variables). Ensure your Splunk HEC endpoint is reachable and the token is valid. A sketch of optional transport hardening follows.

Splunk HEC Docs
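The comment inside `log()` mentions transport hardening. Here is a minimal sketch of what that could look like, assuming Node’s built-in `https` module; the hypothetical `postToHec` helper, the 5-second timeout, and the `rejectUnauthorized: false` flag (for self-signed development certificates only) are illustrative choices, not part of the tutorial’s required code:

```typescript
import https from 'node:https';
import axios from 'axios';

// Hypothetical helper mirroring the axios.post call in SplunkAuditLogStore.log().
async function postToHec(hecUrl: string, hecToken: string, payload: unknown): Promise<void> {
    const httpsAgent = new https.Agent({
        keepAlive: true,           // Reuse TLS connections across audit events
        rejectUnauthorized: false, // DEV ONLY: accept a self-signed Splunk certificate
    });
    await axios.post(hecUrl, payload, {
        headers: {
            'Authorization': `Splunk ${hecToken}`,
            'Content-Type': 'application/json',
        },
        timeout: 5000, // Fail fast so auditing never stalls request handling
        httpsAgent,
    });
}
```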
Step 9 (Option B): Implement DatadogAuditLogStore
This class implements the `AuditLogStore` interface, specifically formatting and sending audit data to the Datadog Logs API.

Create `src/auditing/datadog-audit-log-store.ts`

Create the `src/auditing` directory (if you haven’t already) and place the following code inside `datadog-audit-log-store.ts`.
```typescript
import { AuditLogStore, AuditRecord, Logger } from '@ithena-one/mcp-governance'; // Adjust path if needed
import axios from 'axios';

interface DatadogStoreConfig {
    apiKey: string;                // Your Datadog API Key
    site?: string;                 // Your Datadog site (e.g., 'datadoghq.com', 'datadoghq.eu'), defaults to 'datadoghq.com'
    source?: string;               // Optional: Datadog source tag (e.g., 'mcp-governance')
    service?: string;              // Optional: Datadog service tag (e.g., your app name)
    hostname?: string;             // Optional: Datadog hostname tag
    tags?: Record<string, string>; // Optional: Additional Datadog tags
    logger?: Logger;               // Optional logger instance
}

export class DatadogAuditLogStore implements AuditLogStore {
    private readonly config: DatadogStoreConfig;
    private readonly logger: Logger;
    private readonly apiUrl: string;

    constructor(config: DatadogStoreConfig) {
        if (!config.apiKey) {
            throw new Error('Datadog API Key must be provided.');
        }
        const site = config.site || 'datadoghq.com'; // Default site
        this.apiUrl = `https://http-intake.logs.${site}/api/v2/logs`;
        this.config = {
            source: 'mcp-governance', // Sensible default source
            ...config                 // Merge user config
        };
        this.logger = config.logger || console;
        this.logger.info?.('DatadogAuditLogStore configured.', {
            apiUrl: this.apiUrl,
            source: this.config.source,
            service: this.config.service
        });
    }

    async log(record: AuditRecord): Promise<void> {
        // The Datadog Logs API expects an array of log entries.
        // We add common Datadog attributes (ddsource, service, hostname, ddtags);
        // the original AuditRecord becomes the 'message' payload (or nested fields).
        const ddtags = [
            `eventId:${record.eventId}`,
            ...(this.config.tags ? Object.entries(this.config.tags).map(([k, v]) => `${k}:${v}`) : [])
        ].join(',');

        const payload = [{
            ddsource: this.config.source,
            ddtags: ddtags,
            hostname: this.config.hostname || record.serviceIdentifier || 'unknown',  // Use serviceIdentifier if hostname missing
            service: this.config.service || record.serviceIdentifier || 'mcp-server', // Use serviceIdentifier if service missing
            message: JSON.stringify(record), // Send the full audit record as the message content initially
            // Alternatively, map specific AuditRecord fields to top-level Datadog attributes:
            // timestamp: record.timestamp,
            // status: record.outcome.status, // Map status for faceting
            // mcp_method: record.mcp.method,
            // user_id: typeof record.identity === 'string' ? record.identity : record.identity?.id,
            // ... other mapped fields ...
            // audit_details: record // Nest the remaining details
        }];

        try {
            await axios.post(this.apiUrl, payload, {
                headers: {
                    'DD-API-KEY': this.config.apiKey,
                    'Content-Type': 'application/json',
                },
                // Optional: Add timeout etc.
            });
            this.logger.debug?.('Successfully sent audit record to Datadog Logs API', { eventId: record.eventId });
        } catch (error: any) {
            // Log failure but don't throw - audit logging shouldn't break the app
            this.logger.error?.('Failed to send audit record to Datadog Logs API', {
                eventId: record.eventId,
                error: error.message,
                status: error.response?.status,
                // data: error.response?.data
            });
        }
    }

    // Optional initialize/shutdown methods if needed (e.g., for batching)
}
```
Replace the placeholders for `apiKey` and, if needed, `site` when instantiating. Load the API key securely (e.g., from environment variables). Ensure you use the correct Datadog site endpoint. Consider mapping specific fields from `AuditRecord` to top-level Datadog attributes for better indexing and searching; a sketch of that mapping follows.

Datadog Logs API Docs
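As a sketch of the mapping alternative noted in the code comments: this hypothetical `toDatadogEntry` helper promotes a few fields to top-level attributes Datadog can facet on and nests the full record for drill-down. The field names (`eventId`, `outcome.status`, `mcp.method`, `identity`) follow this tutorial; verify them against your SDK version’s `AuditRecord` type:

```typescript
import { AuditRecord } from '@ithena-one/mcp-governance'; // Adjust path if needed

// Hypothetical mapper: top-level attributes for faceting, full record nested.
function toDatadogEntry(record: AuditRecord, source: string, service: string): Record<string, unknown> {
    return {
        ddsource: source,
        service,
        timestamp: record.timestamp,
        status: record.outcome?.status, // Facet on success/denial/failure
        mcp_method: record.mcp?.method, // Facet on the MCP method called
        usr: {
            id: typeof record.identity === 'string' ? record.identity : record.identity?.id,
        },
        audit_details: record,          // Keep the complete record for drill-down
    };
}
```

In `log()`, you would then POST `[toDatadogEntry(record, this.config.source!, this.config.service ?? 'mcp-server')]` in place of the `message`-based payload.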
Step 10: Update governed-app.ts to Use the New Audit Store

We now configure the `GovernedServer` to use your chosen Splunk or Datadog audit store implementation instead of the console logger.

Modify `src/governed-app.ts`:
```typescript
// src/governed-app.ts
import { Server as BaseServer } from '@modelcontextprotocol/sdk/server/index.js';
import { StdioServerTransport } from '@modelcontextprotocol/sdk/server/stdio.js'; // Needs header support for real testing
import { z } from 'zod';
import process from 'node:process';
import jwt from 'jsonwebtoken';
import jwksClient from 'jwks-rsa';

// --- Import Governance SDK, Auth0 Modules & NEW Audit Module ---
import {
    ConsoleLogger,
    ConsoleAuditLogStore,
    GovernedServer,
    GovernedServerOptions,
    GovernedRequestHandlerExtra,
    // ... other governance imports from the previous tutorial ...
} from '@ithena-one/mcp-governance';
import { Auth0IdentityResolver } from './auth/auth0-identity-resolver.js';
import { Auth0RoleStore } from './auth/auth0-role-store.js';
// --- CHOOSE ONE AUDIT STORE TO IMPORT ---
// import { SplunkAuditLogStore } from './auditing/splunk-audit-log-store.js';
// import { DatadogAuditLogStore } from './auditing/datadog-audit-log-store.js';

console.log('Starting Governed MCP Server (External Auditing)...');

// --- 1. Base Server & Default Components ---
const baseServer = new BaseServer({ name: "MyGovernedMCPServer-Audit", version: "1.0.0" }, { capabilities: { tools: {} } });
const logger = new ConsoleLogger({}, 'debug');
// const auditStore = new ConsoleAuditLogStore(); // <-- REMOVE or COMMENT OUT

// --- 2. Configure and Instantiate Auth0 & Audit Components ---
const AUTH0_DOMAIN = process.env.AUTH0_DOMAIN || 'YOUR_AUTH0_DOMAIN'; // Load from env or replace
const API_AUDIENCE = process.env.AUTH0_API_AUDIENCE || 'YOUR_API_AUDIENCE'; // Load from env or replace
const AUTH0_ROLES_CLAIM = process.env.AUTH0_ROLES_CLAIM || 'https://myapp.example.com/roles'; // Load from env or replace

const identityResolver = new Auth0IdentityResolver({ auth0Domain: AUTH0_DOMAIN, apiAudience: API_AUDIENCE, logger: logger });
const roleStore = new Auth0RoleStore({ rolesClaim: AUTH0_ROLES_CLAIM, logger: logger });

// --- Instantiate YOUR CHOSEN Audit Log Store ---
// Option A: Splunk
/*
const SPLUNK_URL = process.env.SPLUNK_HEC_URL || 'YOUR_SPLUNK_HEC_URL';
const SPLUNK_TOKEN = process.env.SPLUNK_HEC_TOKEN || 'YOUR_SPLUNK_HEC_TOKEN';
const auditStore = new SplunkAuditLogStore({ hecUrl: SPLUNK_URL, hecToken: SPLUNK_TOKEN, logger: logger, source: 'mcp-app' });
*/
// Option B: Datadog
/*
const DATADOG_API_KEY = process.env.DATADOG_API_KEY || 'YOUR_DATADOG_API_KEY';
const DATADOG_SITE = process.env.DATADOG_SITE || 'datadoghq.com'; // Or 'datadoghq.eu', etc.
const auditStore = new DatadogAuditLogStore({ apiKey: DATADOG_API_KEY, site: DATADOG_SITE, logger: logger, service: 'my-mcp-service' });
*/
// --- Fallback if neither is chosen (comment out if using Splunk/Datadog) ---
const auditStore = new ConsoleAuditLogStore(); // Keep console if not integrating yet

if (!(auditStore instanceof ConsoleAuditLogStore)) { // Check if we instantiated Splunk/DD
    logger.info(`Configured external AuditLogStore: ${auditStore.constructor.name}`);
} else {
    logger.warn(`Using ConsoleAuditLogStore. Configure Splunk or Datadog for production auditing.`);
}
// --- End Audit Store Instantiation ---

// --- 3. GovernedServer Configuration ---
const governedServerOptions: GovernedServerOptions = {
    logger: logger,
    auditStore: auditStore, // <-- Use the instantiated store
    identityResolver: identityResolver,
    roleStore: roleStore,
    permissionStore: testPermissionStore, // Test permission store from the previous tutorial
    enableRbac: true,
    auditDeniedRequests: true,
    serviceIdentifier: "governed-app-external-audit",
    // Ensure sanitizeForAudit is configured, especially when sending to an external system
    // sanitizeForAudit: myCustomSanitizer, // Consider adding a custom one
};

// --- 4. Create GovernedServer instance ---
const governedServer = new GovernedServer(baseServer, governedServerOptions);

// ... rest of the file (schemas, handlers, connect, shutdown) remains the same ...

// --- 5. Define Tool Schemas (testUserId removed) ---
const helloToolSchema = z.object({
    jsonrpc: z.literal("2.0"),
    id: z.union([z.string(), z.number()]),
    method: z.literal('tools/callHello'),
    params: z.object({
        arguments: z.object({ greeting: z.string().optional().default('Hello') }).optional().default({ greeting: 'Hello' }),
        _meta: z.any().optional()
    })
});
const sensitiveToolSchema = z.object({
    jsonrpc: z.literal("2.0"),
    id: z.union([z.string(), z.number()]),
    method: z.literal('tools/callSensitive'),
    params: z.object({
        arguments: z.any().optional(),
        _meta: z.any().optional()
    })
});

// --- 6. Register Handlers (testUserId removed) ---
governedServer.setRequestHandler(helloToolSchema, async (request, extra: GovernedRequestHandlerExtra) => {
    const scopedLogger = extra.logger || logger;
    const identityId = typeof extra.identity === 'string' ? extra.identity : extra.identity?.id;
    scopedLogger.info(`[Handler] Executing callHello for identity: ${identityId || 'anonymous'} with roles: ${JSON.stringify(extra.roles)}. EventID: ${extra.eventId}`);
    const greeting = request.params?.arguments?.greeting || 'DefaultGreeting';
    const responseText = `${greeting} ${identityId || 'World'} from governed server!`;
    return { content: [{ type: 'text', text: responseText }] };
});
governedServer.setRequestHandler(sensitiveToolSchema, async (request, extra: GovernedRequestHandlerExtra) => {
    const identityId = typeof extra.identity === 'string' ? extra.identity : extra.identity?.id;
    const scopedLogger = extra.logger || logger;
    scopedLogger.info(`[Handler] Executing callSensitive for identity: ${identityId}`, { roles: extra.roles });
    return { content: [{ type: 'text', text: `Sensitive data accessed by ${identityId}` }] };
});
logger.info('Handlers registered.');

// --- 7. Connect and Shutdown ---
const transport = new StdioServerTransport();
async function startServer() {
    try {
        await governedServer.connect(transport);
        logger.info("Governed MCP server (External Auditing) started.");
    } catch (error) {
        logger.error("Failed to start server", error as Error);
        process.exit(1);
    }
}
const shutdown = async () => {
    logger.info("Shutting down...");
    try {
        await governedServer.close();
        logger.info("Shutdown complete.");
        process.exit(0);
    } catch (err) {
        logger.error("Error during shutdown:", err);
        process.exit(1);
    }
};
process.on('SIGINT', shutdown);
process.on('SIGTERM', shutdown);
startServer();
```
- Uncomment the code block for the platform you chose (Splunk or Datadog).
- Comment out the other platform’s block and the fallback `ConsoleAuditLogStore` instantiation.
- Replace the placeholders (`YOUR_SPLUNK_...`, `YOUR_DATADOG_...`) with your actual credentials and endpoints. Strongly recommended: use environment variables (`process.env.VAR_NAME`) to load secrets, as sketched below.
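A minimal sketch of the environment-variable approach using the `dotenv` package (an assumption; any secret-management mechanism works), placed at the very top of `governed-app.ts`. The `requireEnv` helper is hypothetical and simply fails fast when a required variable is missing:

```typescript
import 'dotenv/config'; // Loads variables from a local .env file

// Hypothetical helper: read a required secret or exit with a clear error.
function requireEnv(name: string): string {
    const value = process.env[name];
    if (!value) {
        console.error(`Missing required environment variable: ${name}`);
        process.exit(1);
    }
    return value;
}

// Example for the Splunk option; the Datadog option works the same way.
const SPLUNK_HEC_URL = requireEnv('SPLUNK_HEC_URL');
const SPLUNK_HEC_TOKEN = requireEnv('SPLUNK_HEC_TOKEN');
```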
Step 11: Testing
- Set Environment Variables: If using environment variables, make sure they are set before running the server (e.g., using a `.env` file and the `dotenv` package, or exporting them in your shell).
- Rebuild: `npm run build`
- Run: `npm run start`
- Obtain Auth0 Token: Get a valid Access Token.
- Send Requests (using `curl` or similar):
  - Send a request that should succeed (e.g., `tools/callHello` with a token for a user with appropriate roles).
  - Send a request that should be denied by RBAC (e.g., `tools/callSensitive` with a token for a user without the ‘admin’ role claim).
  - Send a request with an invalid/expired Auth0 token.
- Check Splunk/Datadog:
  - Navigate to your Splunk Search & Reporting app or your Datadog Logs Explorer.
  - Search for events/logs matching your configured `source`, `sourcetype`, or `index` (Splunk), or `source` and `service` (Datadog). You might also search for the `eventId` or `mcp.method`.
  - Verify that you see JSON logs corresponding to your requests.
  - Examine the log content: ensure fields like `eventId`, `identity` (with the Auth0 `sub`), `mcp`, `transport`, and `outcome` (including `status` and `error` details for failures/denials) are present and correct.
Audit logs corresponding to MCP requests (and their outcomes) should appear in your chosen Splunk or Datadog platform, containing the structured `AuditRecord` data. Console output for audit logs should stop (unless you kept the `ConsoleAuditLogStore` fallback). An illustrative record shape follows.
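For orientation, here is an illustrative (not authoritative) record as it might appear in your platform. The field names follow what this tutorial references; the exact `AuditRecord` structure is defined by `@ithena-one/mcp-governance` and may differ:

```typescript
// Illustrative only - the authoritative AuditRecord type lives in the SDK.
const exampleAuditRecord = {
    eventId: 'evt-123',                               // Unique per request
    timestamp: '2025-01-01T12:00:00.000Z',
    serviceIdentifier: 'governed-app-external-audit', // From GovernedServerOptions
    identity: { id: 'auth0|abc123' },                 // Resolved from the Auth0 'sub' claim
    mcp: { method: 'tools/callSensitive' },
    transport: { type: 'stdio' },
    outcome: {
        status: 'denied',                             // e.g., an RBAC denial
        error: { message: 'Permission denied' },
    },
};
```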
Final Code Structure
Your `src` directory might now look like:

```
└── src/
    ├── auth/
    │   ├── auth0-identity-resolver.ts
    │   └── auth0-role-store.ts        (Optional)
    ├── auditing/
    │   ├── splunk-audit-log-store.ts  (If using Splunk)
    │   └── datadog-audit-log-store.ts (If using Datadog)
    └── governed-app.ts                (Imports from ./auth and ./auditing)
```
Next Steps & Production Considerations
You’ve successfully integrated external audit logging!

- Error Handling: Add more robust error handling within the `AuditLogStore` (e.g., retry logic, a dead-letter queue for failed sends).
- Batching: For high-volume servers, implement log batching in your `AuditLogStore` to reduce the number of HTTP requests (see the sketch after this list).
- Asynchronous Sending: Ensure the `axios.post` call doesn’t block the event loop significantly (it’s already `async`, but be mindful of performance).
- Configuration: Load API keys/tokens/URLs securely from environment variables or a configuration service; never hardcode them.
- Sanitization (`sanitizeForAudit`): Critically review the `defaultSanitizeForAudit` or implement your own. Ensure no sensitive data (PII, secrets from claims, etc.) is sent to your logging platform unless explicitly intended and allowed. This is crucial for compliance and security.
- Platform Indexing: You may want to adjust the log format slightly or configure indexing/parsing rules in Splunk/Datadog to make searching specific fields (like `eventId`, `identity.id`, `outcome.status`) easier.
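Here is a minimal sketch of the batching idea as a wrapper around any `AuditLogStore`. The buffer size, flush interval, and per-record forwarding in `flush()` are illustrative simplifications; a production version would send one multi-event HTTP request per batch (both Splunk HEC and the Datadog Logs API accept multiple events per call):

```typescript
import { AuditLogStore, AuditRecord } from '@ithena-one/mcp-governance'; // Adjust path if needed

// Hypothetical wrapper: buffers records, flushing every 50 records or 5 seconds.
export class BatchingAuditLogStore implements AuditLogStore {
    private buffer: AuditRecord[] = [];
    private readonly timer: NodeJS.Timeout;

    constructor(private readonly inner: AuditLogStore, private readonly maxBatch = 50) {
        this.timer = setInterval(() => void this.flush(), 5000);
        this.timer.unref(); // Don't keep the process alive just for flushing
    }

    async log(record: AuditRecord): Promise<void> {
        this.buffer.push(record);
        if (this.buffer.length >= this.maxBatch) {
            await this.flush();
        }
    }

    private async flush(): Promise<void> {
        const batch = this.buffer.splice(0, this.buffer.length);
        // Simplification: forward records individually and tolerate per-record failures.
        await Promise.allSettled(batch.map((r) => this.inner.log(r)));
    }

    async shutdown(): Promise<void> {
        clearInterval(this.timer);
        await this.flush(); // Drain remaining records on close
    }
}
```

You would wrap your chosen store, e.g. `new BatchingAuditLogStore(new SplunkAuditLogStore({ ... }))`, and make sure its `shutdown()` is invoked from the existing shutdown handler.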
You now have essential identity, RBAC, and auditing foundations in place, integrated with external systems. You can continue building by replacing the `InMemoryPermissionStore` or adding a `CredentialResolver` when needed.