Objective 3Sixty Installation Guide
(Updated September 9th, 2025)
The following steps walk you through a typical on-premises (local) installation of the current version of 3Sixty. For previous version requirements, please see the Compatibility Matrix.
To install 3Sixty with Docker, follow the instructions in 3Sixty Local Docker Development.
Platform Requirements
3Sixty can be deployed on both Linux and Windows hosts using Docker. Since the deployment is containerized, the primary requirement is that the host system supports recent versions of Docker and Docker Compose.
Supported Operating Systems
Linux
- Ubuntu 20.04 LTS or later
- Debian 11 or later
- CentOS 8 / Rocky Linux 8 or later
- Other modern Linux distributions should work if they support Docker ≥ 20.10.
Windows
- Windows 10 Pro/Enterprise (with WSL2 backend enabled)
- Windows 11 Pro/Enterprise (with WSL2 backend enabled)
Note:
- Windows Home editions are not recommended, as Docker Desktop with WSL2 is required for reliable operation.
- Windows Server is not officially supported as a native Docker host. Microsoft dropped support for the classic "Docker on Windows Server" deployment (Windows Server Containers + Hyper-V isolation) years ago; the reliable way to run Docker now is via WSL2, which is only available on Windows 10/11 Pro/Enterprise, not on Windows Server.
Sizing Guidance
3Sixty is deployed as a set of Docker containers (Admin, Discovery, MongoDB, OpenSearch or Elasticsearch, RabbitMQ, SCIM, Nginx proxy, plus optional Ollama/oi-rag). Exact footprint varies with enabled components and data volume.
Quick Tiers
1. Evaluation / Single-user Demo (laptop or small VM)
CPU: 4 vCPU
RAM: 12–16 GB
- If running OpenSearch/Elasticsearch, allocate at least 8 GB total RAM; if also enabling Ollama, prefer 16 GB.
Disk: 60–100 GB SSD
- 15–30 GB for Docker images + logs
- 10–30 GB for MongoDB data (small sample sets)
- 10–30 GB for OpenSearch/Elasticsearch indices (light test content)
Note:
- Run either OpenSearch or Elasticsearch, not both, to keep memory low.
- Keep Ollama models small (e.g., 3B–7B class) or disable Ollama during pure 3Sixty evaluation.
2. Small Non-Production
CPU: 8 vCPU
RAM: 24–32 GB
- Allocate 8–12 GB combined heap for the search engine (see "Per-service Guidance" below).
Disk: 200–300 GB SSD (NVMe preferred)
- MongoDB: 50–150 GB (depends on ingested docs & metadata)
- OpenSearch/Elasticsearch: 100–150 GB (indices + replicas, if enabled)
- Headroom for growth & backups: 50+ GB
Note:
- Use one search engine.
- Keep RabbitMQ on the same host.
3. Production Starter (single host, low–moderate load)
CPU: 16 vCPU
RAM: 48–64 GB
Disk: 500 GB–1 TB NVMe SSD
- MongoDB: 200–400 GB (depends on retention policy)
- OpenSearch/Elasticsearch: 300–600 GB (depends on index size & replicas)
- Docker/Logs/OS: 50–100 GB
Note:
- For resilience, plan a path to split MongoDB and OpenSearch/Elasticsearch onto dedicated hosts or managed services as usage grows.
4. Production (scaling & HA path)
Depending on customer business requirements and workload (S, M, L, XL), we will provide customized configurations to suit customer needs.
Per-service Guidance (rules of thumb)
OpenSearch / Elasticsearch
- Heap ≈ 50% of container RAM; cap heap at ≤ 32 GB to keep compressed oops effective.
- Minimum: 4–8 GB heap for pilots; 8–16+ GB heap for production.
- SSD/NVMe strongly recommended; search performance is IOPS-sensitive.
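A minimal sketch of pinning the heap in docker-compose.yaml, assuming the standard opensearchproject/opensearch image (the service name and tag are examples; Elasticsearch images use ES_JAVA_OPTS instead):
  opensearch:
    image: opensearchproject/opensearch:2.11.0
    environment:
      # Fixed 8 GB heap (Xms = Xmx avoids resize pauses); stays under the 32 GB compressed-oops limit
      - OPENSEARCH_JAVA_OPTS=-Xms8g -Xmx8g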
MongoDB
- The WiredTiger cache uses roughly 50% of the RAM available in its container/host.
- Give it 8–16 GB for pilots; 32–64+ GB for production nodes handling large working sets.
- SSD/NVMe required; ensure frequent backups and an oplog sized for your RPO.
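If you need to cap the cache explicitly rather than rely on the ~50%-of-RAM default, mongod accepts the --wiredTigerCacheSizeGB flag, which can be set in compose (a sketch; the service name and value are examples):
  mongodb:
    image: mongo:7
    # Cap the WiredTiger cache at 8 GB instead of the default
    command: ["mongod", "--wiredTigerCacheSizeGB", "8"]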
RabbitMQ
- Light CPU/memory footprint compared to DB/search: 1–2 vCPU and 1–2 GB RAM is typically fine for pilots; 2–4 vCPU and 4–8 GB for heavier workflows.
Admin / Discovery web apps
- Usually modest: budget 1–2 vCPU and 1–2 GB RAM each to start; scale with users.
Nginx proxy / TLS
- Very small footprint; 1 vCPU and 512 MB–1 GB RAM is typically enough.
Ollama + oi-rag (optional)
- CPU-only inference: add 4–8 vCPU and 8–16 GB RAM if running 3B–7B models.
- GPU (optional): if you plan to run >7B models or need higher throughput, deploy a separate GPU host.
Storage & Performance Tips
- Always use SSD/NVMe for MongoDB and search data volumes. Rotational disks will bottleneck indexing and queries.
- Filesystem reservations: leave 20–30% free space on DB and search volumes to avoid performance cliffs.
- Backups: plan snapshot/restore for MongoDB and search; test restores regularly.
Configuring a DNS Address for 3Sixty
1. Choose a DNS Name
Work with your IT/network team to reserve a hostname within your corporate DNS zone, for example:
- 3sixty.mycompany.com
2. Point DNS to Your Host
Update your DNS records so that the hostname resolves to the server running 3Sixty:
- On-premises deployments: create an A record pointing to the static IP of the server.
- Cloud deployments: create a DNS record pointing to the load balancer or public IP.
3. Update Nginx TLS Certificates
3Sixty routes all traffic through the bundled Nginx reverse proxy. Certificates are loaded from ./nginx/certs/tls.crt and ./nginx/certs/tls.key.
- For production, use a certificate issued for your chosen DNS name (via Let's Encrypt or your enterprise CA).
- Replace the default localhost self-signed certificate with the DNS-matched certificate and key.
Example:
nginx/certs/tls.crt # Certificate for 3sixty.mycompany.com
nginx/certs/tls.key # Matching private key
After updating the files, restart Nginx via Docker:
docker compose restart nginx-proxy
4. Update OAuth2 / SSO Settings
If you are integrating with Microsoft Entra ID (Azure AD) or another identity provider, update the redirect URLs to match your DNS name. For example:
https://3sixty.mycompany.com/3sixty-admin/oauth2/callback
https://3sixty.mycompany.com/3sixty-discovery/oauth2/callback
5. Test External Connectivity
Verify that:
- The DNS name resolves correctly from inside and outside your corporate network.
- A browser connection to https://3sixty.mycompany.com/3sixty-admin/ loads the Admin UI.
- The certificate is valid and trusted (no browser warnings).
Example (BIND-style zone file):
3sixty IN A 203.0.113.42
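A quick way to run these checks from a shell, using the example hostname and address above:
dig +short 3sixty.mycompany.com                       # should print 203.0.113.42 (or your LB address)
curl -sSI https://3sixty.mycompany.com/3sixty-admin/  # expect an HTTP success status and no TLS errors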
Configuring SSL Certificates for 3Sixty
All 3Sixty deployments must be secured with HTTPS. The bundled Nginx reverse proxy handles TLS termination, so you’ll need to configure a valid SSL certificate and private key for your chosen DNS address (e.g. 3sixty.mycompany.com).
1. Obtain a Certificate
You can use any of the following sources:
- Public CA: use Let's Encrypt or a commercial CA (e.g. DigiCert, Entrust).
- Enterprise CA: if your organisation has an internal PKI, request a certificate for your chosen DNS name.
- Wildcard certificate: a wildcard like *.mycompany.com can cover multiple 3Sixty subdomains.
- Self-signed certificate (not recommended for production): for evaluation or lab setups only.
The certificate must include:
- Common Name (CN) or Subject Alternative Name (SAN): your 3Sixty DNS address (e.g. 3sixty.mycompany.com).
- Private key: matching RSA/ECDSA key.
- Intermediate certificates: if provided by your CA, include them in the .crt bundle.
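For a lab-only self-signed certificate matching your DNS name, one possible openssl invocation (evaluation only; key size and validity are examples):
openssl req -x509 -newkey rsa:2048 -nodes -days 365 \
  -keyout nginx/certs/tls.key -out nginx/certs/tls.crt \
  -subj "/CN=3sixty.mycompany.com" \
  -addext "subjectAltName=DNS:3sixty.mycompany.com"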
2. Place the Certificate in Nginx
Certificates are loaded by Nginx from the nginx/certs folder. Copy your files into that directory:
nginx/certs/tls.crt # Your certificate (may include intermediates)
nginx/certs/tls.key # Your private key
Ensure file permissions restrict access to the private key (readable only by the account running Docker).
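For example, on a Linux host (paths relative to the compose project):
chmod 600 nginx/certs/tls.key   # private key readable only by its owner
chmod 644 nginx/certs/tls.crt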
3. Restart the Proxy
After placing the new files, restart the Nginx container:
docker compose restart nginx-proxy
Nginx will now serve HTTPS traffic using your certificate.
4. Verify the Certificate
Open https://3sixty.mycompany.com/3sixty-admin/ in a browser and confirm the padlock icon shows the certificate as valid.
Use tools like openssl or curl for extra verification:
openssl s_client -connect 3sixty.mycompany.com:443 -showcerts
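curl can perform a similar check against the example hostname, printing the certificate subject, issuer, and expiry from its verbose TLS output:
curl -vI https://3sixty.mycompany.com/3sixty-admin/ 2>&1 | grep -E "subject:|issuer:|expire"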
5. Renewal & Maintenance
- Let's Encrypt: certificates expire every 90 days. Automate renewal using Certbot or your organisation's standard tool.
- Enterprise/commercial CAs: track expiry dates and replace certificates ahead of time.
- Container redeploy: if you rotate certificates, update the nginx/certs folder and restart nginx-proxy.
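One way to automate this with Certbot (a sketch; assumes certbot runs on the Docker host and that /opt/3sixty is your compose project directory):
# /etc/cron.d/certbot-renew: attempt renewal daily; restart the proxy only when a cert was actually renewed
0 3 * * * root certbot renew --deploy-hook "cd /opt/3sixty && docker compose restart nginx-proxy"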
Best practice for production:
- Always use a certificate signed by a trusted CA (public or enterprise).
- Avoid self-signed certificates in customer/enterprise environments.
- Consider automating renewals (Let's Encrypt + DNS challenge, or integration with your enterprise certificate manager).
Security Hardening Considerations
Authentication & Access Control
- Single Sign-On (SSO):
  - Configure OAuth2/OpenID Connect with Microsoft Entra ID (Azure AD) or your corporate identity provider.
  - Update .env.discovery with your CLIENT_ID, TENANT_ID, and CLIENT_SECRET (see the sketch after this list).
- SCIM user provisioning:
  - Use SCIM to automate user lifecycle management from your IdP.
  - Requires a publicly accessible, TLS-secured SCIM endpoint.
- Local accounts:
  - Disable or restrict any default/local accounts once SSO is in place.
- Least privilege:
  - Grant roles in 3Sixty based on job responsibilities (e.g. reserve the admin role for administrators).
  - Review access regularly.
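A minimal sketch of the relevant .env.discovery entries (the variable names are those referenced above; the values shown are placeholders):
# .env.discovery: OAuth2/OIDC settings for Microsoft Entra ID
CLIENT_ID=00000000-0000-0000-0000-000000000000   # app registration (client) ID
TENANT_ID=11111111-1111-1111-1111-111111111111   # directory (tenant) ID
CLIENT_SECRET=<store-in-a-secret-manager>        # never commit this to Git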
Network & Ports
- External exposure:
  - Only expose required ports externally:
    - 443 (HTTPS via the Nginx reverse proxy)
    - Optional: the SCIM endpoint if integrated with Entra ID/MS Copilot
  - Block direct external access to backend services (MongoDB, OpenSearch/Elasticsearch, RabbitMQ).
- Internal networking:
  - Keep all service-to-service traffic on the internal Docker network (threesixty by default).
- Firewalling (see the sketch after this list):
  - Restrict inbound connections at the host or network firewall level.
  - Allow outbound traffic only where required (e.g. to AWS ECR, IdP endpoints, tunneling services if used).
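A host-level example using ufw on Ubuntu (a sketch; adapt to your firewall tooling and allowed source ranges):
sudo ufw default deny incoming
sudo ufw allow 22/tcp    # SSH for administration; restrict the source range if possible
sudo ufw allow 443/tcp   # HTTPS via the Nginx reverse proxy
sudo ufw enable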
Certificates & TLS
- Production certificate:
  - Always use a trusted CA-signed certificate for your 3Sixty DNS address.
  - Replace self-signed localhost certificates before going live.
- TLS enforcement (see the sketch after this list):
  - Redirect port 80 → 443 to enforce HTTPS.
  - Disable weak TLS versions/ciphers in the Nginx config.
- Downstream trust:
  - If 3Sixty integrates with systems using private/self-signed certs, import them into the JVM truststore of the affected services (/opt/certs/*.cer mounted in containers).
- Renewals:
  - Automate certificate rotation (e.g. Certbot, enterprise certificate manager).
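A sketch of the corresponding Nginx directives (the server name is the example hostname used throughout; the in-container certificate path is an assumption and must match your proxy image):
server {
    listen 80;
    server_name 3sixty.mycompany.com;
    return 301 https://$host$request_uri;   # force HTTPS
}
server {
    listen 443 ssl;
    server_name 3sixty.mycompany.com;
    ssl_certificate     /etc/nginx/certs/tls.crt;
    ssl_certificate_key /etc/nginx/certs/tls.key;
    ssl_protocols TLSv1.2 TLSv1.3;          # disable legacy TLS versions
    # ... proxy_pass blocks for the 3Sixty services ...
}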
Container & Host Security
- Image provenance:
  - Pull images only from Objective's private AWS ECR registry.
- Secrets management:
  - Never commit .env.* files to Git.
  - Store secrets (tokens, client secrets) in a secure vault or secret manager.
  - Rotate secrets regularly.
- OS patching:
  - Keep the Docker host OS up to date with vendor security patches.
- Docker runtime security:
  - Run containers as non-root where possible.
  - Apply resource limits (CPU, memory) to reduce the risk of DoS (a compose sketch follows this list).
  - Enable audit logging for container events.
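Resource limits can be expressed in docker-compose.yaml via deploy.resources, which Docker Compose v2 applies on docker compose up (the service name and values are illustrative):
  admin:
    deploy:
      resources:
        limits:
          cpus: "2"       # at most 2 CPU cores
          memory: 2g      # hard memory cap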
Logging, Monitoring & Auditing
- Application logs:
  - Centralise 3Sixty logs in your SIEM or log aggregation platform.
  - Configure log rotation to avoid filling disks.
- Database & search:
  - Monitor MongoDB and OpenSearch/Elasticsearch for failed login attempts, replication lag, and index health.
- Certificates:
  - Monitor expiry dates and renew proactively.
- Access audits:
  - Periodically review who has access to Admin/Discovery, SCIM endpoints, and container hosts.
Deployment Best Practices
- DNS + SSO alignment:
  - Ensure your DNS hostname (e.g. 3sixty.mycompany.com) is consistent across Nginx, certificates, and IdP redirect URIs.
- Public URL exposure:
  - When testing, use Ngrok/Cloudflare Tunnel.
  - For production, always deploy behind a corporate reverse proxy/load balancer with DNS mapping.
- Backups:
  - Implement regular backups of MongoDB, OpenSearch indices, and configuration files.
- High availability:
  - For production, plan to externalize MongoDB and OpenSearch into managed clusters or separate hosts.
Logging & File System Considerations
3Sixty runs as multiple containers (Admin, Discovery, Nginx proxy, MongoDB, OpenSearch/Elasticsearch, RabbitMQ, SCIM, optional Ollama/oi-rag). For reliability and observability, prefer stateless containers (logs to stdout/stderr) and state in named volumes.
Where logs go by default
Admin & Discovery (Java, Spring Framework on Amazon Corretto)
- Recommended: log to stdout/stderr and let Docker collect logs.
- Configure Log4j2 with a ConsoleAppender only (no file appender inside the container).
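A minimal Log4j2 configuration along those lines (a sketch; the pattern and log level are examples):
<Configuration status="WARN">
  <Appenders>
    <Console name="Console" target="SYSTEM_OUT">
      <PatternLayout pattern="%d{ISO8601} %-5level [%t] %c{1} - %msg%n"/>
    </Console>
  </Appenders>
  <Loggers>
    <Root level="info">
      <AppenderRef ref="Console"/>  <!-- console only; Docker collects stdout -->
    </Root>
  </Loggers>
</Configuration>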
Nginx
- Recommended: log access/error to stdout/stderr (common in container images). If your image writes to files (e.g., /var/log/nginx/*.log), either:
  - switch to stdout/stderr in the nginx config, or
  - mount /var/log/nginx and ship those files.
MongoDB
- Writes to stdout by default in many container images. If configured for file logs, mount /var/log/mongodb.
- Data lives in /data/db → must be a named volume (see "Persisting Data with Named Volumes" below).
OpenSearch / Elasticsearch
- Logs typically go to stdout. If file logging is enabled, mount /usr/share/opensearch/logs or /usr/share/elasticsearch/logs.
- Data lives in /usr/share/opensearch/data or /usr/share/elasticsearch/data → must be a named volume.
RabbitMQ
- Logs go to stdout by default in the official image. If file logging is enabled, mount /var/log/rabbitmq.
- Data lives in /var/lib/rabbitmq → must be a named volume.
Persisting Data with Named Volumes
Create named volumes for stateful services so container recreation doesn’t lose data:
volumes:
  mongo_data:
  os_data: # or: es_data
  rabbitmq_data:
Bind them in services:
services:
  mongodb:
    volumes:
      - mongo_data:/data/db
  opensearch:
    volumes:
      - os_data:/usr/share/opensearch/data
  rabbitmq:
    volumes:
      - rabbitmq_data:/var/lib/rabbitmq
Do not store application state in container layers. Always use volumes for DB/search/broker data.
Quick Reference: Common Paths
Component | Logs (stdout default) | Data (must persist)
---|---|---
Admin | stdout (or /var/log/app/* if enabled) | N/A
Discovery | stdout (or /var/log/app/* if enabled) | N/A
Nginx | stdout/stderr (or /var/log/nginx/*) | N/A
MongoDB | stdout (or /var/log/mongodb/*) | /data/db
OpenSearch/ES | stdout (or /usr/share/opensearch/logs, /usr/share/elasticsearch/logs) | /usr/share/opensearch/data or /usr/share/elasticsearch/data
RabbitMQ | stdout (or /var/log/rabbitmq/*) | /var/lib/rabbitmq
SCIM | stdout | N/A
Ollama/oi-rag | stdout (optional GC/file logs if enabled) | Model cache (varies by image; mount if needed)
Recommended Defaults (copy/paste)
- Java services (Admin/Discovery): console logging only; enable Docker log rotation (sketch below); optional GC logging to file with a mounted /var/log/app.
- Stateful services: always use named volumes for data; avoid bind-mounting arbitrary host paths.
- Centralization: prefer a Docker logging driver that ships to your SIEM; use sidecars only if file logs are mandated.
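Log rotation with the json-file driver can be set per service in docker-compose.yaml (sizes are illustrative):
  admin:
    logging:
      driver: json-file
      options:
        max-size: "10m"   # rotate after 10 MB
        max-file: "5"     # keep at most 5 rotated files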
Upgrade an Existing Docker Installation
“Persists” vs “Replaces”
Persisted (keep): data volumes for MongoDB, OpenSearch/Elasticsearch, RabbitMQ; any bind-mounted folders (e.g., TLS certs); your .env.* files.
Replaced (upgrade): container images (Admin, Discovery, Nginx proxy, SCIM, etc.) and their containers.
Tip: Always use named volumes for DB/search/broker data so upgrades don’t touch your data.
1. Pre-flight (5–10 min)
- Pin image tags in your docker-compose.yaml to specific versions (avoid :latest). This gives you deterministic upgrades/rollbacks (see the example after this checklist).
- Maintenance window: announce downtime (Compose on a single host isn't HA).
- Check free disk space: at least 20–30% free on volumes and the Docker data dir.
- Export current config:
  docker compose config > compose.effective.yml
  cp .env* backups/env-$(date +%F)/
- Back up data (recommended):
  - MongoDB (logical dump):
    docker exec -t mongodb mongodump --archive=/dump.archive
    docker cp mongodb:/dump.archive ./backups/mongo-$(date +%F).archive
  - OpenSearch/Elasticsearch (snapshot repo): ensure you have configured a snapshot repository; take a snapshot named with today's date.
  - RabbitMQ (definitions, optional):
    docker exec rabbitmq rabbitmqctl export_definitions /tmp/defs.json
    docker cp rabbitmq:/tmp/defs.json ./backups/rabbitmq-defs-$(date +%F).json
- If you can't snapshot, at least stop the stack (next step) and take a filesystem-level copy of the volume paths.
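A sketch of pinned image tags in docker-compose.yaml (the registry path and version numbers are placeholders; use the tags from your release notes):
services:
  admin:
    image: <account>.dkr.ecr.<region>.amazonaws.com/3sixty-admin:4.2.1   # pinned, not :latest
  discovery:
    image: <account>.dkr.ecr.<region>.amazonaws.com/3sixty-discovery:4.2.1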
2. Stop the stack (brief downtime)
docker compose down
down stops containers but keeps named volumes (your data). If you also need to prune dangling images later, use docker image prune separately; do not remove volumes.
3. Pull the new images
Update the image tags in docker-compose.yaml (or accept updated tags if you already pinned new ones), then:
docker compose pull
4. Apply any config changes
- Update .env.* files for new settings (keep secrets safe).
- Replace TLS certs if needed (e.g., nginx/certs/tls.crt and tls.key).
- If the release notes mention breaking config changes, make them now.
5. Start the upgraded stack
docker compose up -d
Verify:
docker compose ps
docker compose logs -f --tail=200
- Admin UI loads (/3sixty-admin/)
- Discovery UI loads (/3sixty-discovery/)
- Search, MongoDB, and RabbitMQ report healthy in logs
- SSO/SCIM callbacks succeed (if configured)
6. Post-upgrade checks (smoke tests)
- Log in via SSO; open a few documents; run a search; perform an admin task.
- Check index and database health (OpenSearch/ES green/yellow, Mongo primary OK).
- Confirm certificates are present and valid (no browser warnings).
- If you changed .env.discovery OAuth values, ensure redirects match your DNS.
7. Rollback (if something breaks)
Because you pinned tags:
- Stop: docker compose down
- Revert the image tags in docker-compose.yaml to the previous known-good versions.
- Start the old set: docker compose up -d
Your volumes still contain pre-upgrade data. If you performed data migrations that are not backward compatible, restore from the backups/snapshots you took in step 1.
Backups in a 3Sixty Docker Deployment
1. What needs to be backed up

Component | What to back up | Location (default)
---|---|---
MongoDB | Database contents | Docker volume mongo_data → /data/db
OpenSearch / ES | Index data + snapshots | Docker volume os_data/es_data → /usr/share/opensearch/data
RabbitMQ | Queues & configuration (optional) | Docker volume rabbitmq_data → /var/lib/rabbitmq
TLS certificates | TLS cert & key used by Nginx | nginx/certs/tls.crt and tls.key
Environment files | Deployment config, secrets, OAuth/SCIM settings | .env.* files at project root
Docker config | docker-compose.yaml + overrides | Versioned in Git or config repo
2. How to back up (optional steps)
MongoDB
- Logical backup (portable):
  docker exec -t mongodb mongodump --archive=/dump.archive
  docker cp mongodb:/dump.archive ./backups/mongo-$(date +%F).archive
- Restore:
  docker exec -i mongodb mongorestore --archive=/dump.archive --drop
OpenSearch / Elasticsearch
- Preferred: use the built-in snapshot API with a mounted repository (e.g. an S3 bucket or NFS share).
- Example:
  PUT _snapshot/daily_backup
  { "type": "fs", "settings": { "location": "/snapshots" } }
  PUT _snapshot/daily_backup/snap-2025-08-21?wait_for_completion=true
- Mount /snapshots in your container via docker-compose.yaml.
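The same calls from a shell, assuming the search endpoint is reachable on localhost:9200 (add TLS/authentication flags as your deployment requires). Note that an fs repository location must also be registered via path.repo in the engine's configuration:
curl -X PUT "http://localhost:9200/_snapshot/daily_backup" \
  -H 'Content-Type: application/json' \
  -d '{ "type": "fs", "settings": { "location": "/snapshots" } }'
curl -X PUT "http://localhost:9200/_snapshot/daily_backup/snap-$(date +%F)?wait_for_completion=true"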
RabbitMQ
- Export configuration (users, vhosts, policies, queues):
  docker exec rabbitmq rabbitmqctl export_definitions /tmp/defs.json
  docker cp rabbitmq:/tmp/defs.json ./backups/rabbitmq-defs-$(date +%F).json
- Data volume backup (for queue state): snapshot the rabbitmq_data volume.
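To restore exported definitions, rabbitmqctl provides the matching import command (the filename matches the example export above):
docker cp ./backups/rabbitmq-defs-2025-08-21.json rabbitmq:/tmp/defs.json
docker exec rabbitmq rabbitmqctl import_definitions /tmp/defs.json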
Certificates & Config
- Copy TLS files and .env.* files into your backup system. Example:
  tar czf ./backups/config-$(date +%F).tar.gz nginx/certs .env.* docker-compose.yaml
3. When to back up
- Nightly (optional):
  - MongoDB dump
  - OpenSearch/Elasticsearch snapshot
  - RabbitMQ definitions
- Before upgrades: take a full backup (databases + configs) prior to pulling new images.
- Before major changes: e.g. changing OAuth/SCIM config, rotating certificates.
- Weekly/monthly full backup: include TLS certs, .env.*, compose files, and logs if required by compliance.
4. Where to store backups
- Off-host: do not keep backups only on the same Docker host. Push them to:
  - An enterprise backup solution
  - Cloud storage (AWS S3, Azure Blob, etc.)
  - Encrypted NAS/NFS
- Retention policy: align with your organisation's data protection rules (e.g., 30/90/365 days).
5. Recommended Practices
- Automate: wrap the above steps in a cron job or CI/CD pipeline (e.g., GitLab scheduled jobs); a sketch follows this list.
- Test restores: periodically restore into a staging environment to verify backup integrity.
- Secure backups:
  - Encrypt archives at rest
  - Control access to secrets in .env.* and private keys
- Volume-level backups: for faster disaster recovery, snapshot entire volumes using LVM/ZFS or your cloud provider's block storage snapshots.
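A minimal nightly backup script along these lines (a sketch, using the container names and backup paths from this guide; schedule it via cron):
#!/usr/bin/env bash
# nightly-backup.sh: dump MongoDB, export RabbitMQ definitions, archive config
set -euo pipefail
STAMP=$(date +%F)
BACKUP_DIR=./backups
mkdir -p "$BACKUP_DIR"

# MongoDB logical dump
docker exec -t mongodb mongodump --archive=/dump.archive
docker cp mongodb:/dump.archive "$BACKUP_DIR/mongo-$STAMP.archive"

# RabbitMQ definitions
docker exec rabbitmq rabbitmqctl export_definitions /tmp/defs.json
docker cp rabbitmq:/tmp/defs.json "$BACKUP_DIR/rabbitmq-defs-$STAMP.json"

# TLS certs, env files, compose file
tar czf "$BACKUP_DIR/config-$STAMP.tar.gz" nginx/certs .env.* docker-compose.yaml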
Other Maintenance Considerations
Monitoring & Health Checks
- Container health:
  - Use docker compose ps or monitoring tools to ensure all containers are "healthy."
  - Consider enabling Docker healthchecks in your docker-compose.yaml (a sketch follows this list).
- Application monitoring:
  - Watch MongoDB replication lag (if running a replica set).
  - Monitor the OpenSearch/Elasticsearch cluster state (green/yellow/red).
  - Track RabbitMQ queue depth and connection counts.
- System metrics:
  - CPU, memory, disk space, and I/O throughput on the Docker host.
  - Raise alerts when free disk space drops below 20%.
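A compose-level healthcheck sketch for the proxy (assumes curl is available inside the image; substitute a probe the image actually ships):
  nginx-proxy:
    healthcheck:
      test: ["CMD", "curl", "-fk", "https://localhost/"]   # -f fails on HTTP errors; -k tolerates self-signed certs
      interval: 30s
      timeout: 5s
      retries: 3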
Log Management
- Log rotation: enable Docker json-file log driver rotation (max-size, max-file) or ship logs to a SIEM.
- Retention: configure retention in your central logging system per compliance needs (30–180 days is typical).
- Auditing: review logs regularly for authentication failures, certificate errors, or service crashes.
Data Management
- Backups: follow a daily/weekly strategy (MongoDB dumps, OpenSearch snapshots, RabbitMQ definitions, .env.* configs).
- Test restores: regularly validate backups by restoring to staging.
- Index retention (OpenSearch/ES): configure index lifecycle management to delete or archive old indices (a sketch follows this list).
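For Elasticsearch, an ILM policy that deletes indices after 90 days might look like the following (the policy name and age are examples; OpenSearch provides the equivalent via its ISM plugin with a different endpoint):
PUT _ilm/policy/3sixty-retention
{
  "policy": {
    "phases": {
      "delete": {
        "min_age": "90d",
        "actions": { "delete": {} }
      }
    }
  }
}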
Security Lifecycle
- Patch cycles:
  - Keep the host OS patched monthly.
  - Pull updated Docker images when released.
- Certificates:
  - Rotate TLS certificates before expiry (ideally automated).
  - Replace self-signed certs with CA-signed certs in production.
- Secrets: rotate .env.* secrets, OAuth client secrets, and tokens periodically.
- User review: audit accounts in your IdP/SCIM to enforce least privilege.
Capacity Planning
- Disk growth:
  - Monitor MongoDB and OpenSearch volumes. Growth is proportional to the number/size of ingested documents and the retention period.
- Memory/CPU:
  - Check that the JVM heaps (OpenSearch, Admin/Discovery services) are not frequently under GC pressure, and that the MongoDB cache is not constantly saturated.
- Scaling path:
  - Plan to split DB and search onto dedicated hosts or managed services as workload increases.
Upgrade Practices
- Schedule maintenance windows for upgrades (compose down → pull → up).
- Read the release notes for breaking config changes.
- Rollback plan: keep old image tags pinned in compose for quick rollback.
Disaster Recovery
- Documented procedure: who does what in case of host failure.
- Cold restore drill: restore onto a new host at least annually to validate DR readiness.
- Snapshot strategy: use cloud block storage snapshots or VM snapshots for faster recovery, in addition to logical backups.
Compliance & Audits
- Access review: ensure Admin/Discovery access is aligned with HR roles.
- Log review: periodically review logs for anomalies (especially auth).
- Change control: track updates to docker-compose.yaml, .env.*, and certs in version control.
FAQs
When configuring 3Sixty, the docs all suggest using the 3Sixty Admin UI. Is there a way to configure Objective 3Sixty through code?
- Objective 3Sixty jobs can be created and configured using API calls. We provide a comprehensive set of admin APIs to accomplish that.
- Samples: https://helpdocs.objective.com/3sixty_user/Content/developers/api/v2/api-objects.htm
- We also provide interactive API documentation, available once your 3Sixty system is up and running, at <<serverURL>>/3sixty-admin/swagger-ui/index.html
The plan is to install 3Sixty on a VM and run redaction tasks. What specs do you recommend?
- Sizing recommendations will be provided in the design document based on:
  - Throughput and volume of documents
  - Average/maximum document size
  - Other requirements such as OCR, document conversion, etc.
What would admins/users need to do to log in to an Objective 3Sixty instance: use an IP address or localhost?
- We suggest using a DNS alias and certificate, not localhost or an IP address. This will be covered in the technical design document.
- Docker deployment info can be found in the standard help files, although it is out of date; an update has been requested from Product.
- Authentication can be:
  - Basic username/password (not recommended)
  - LDAP
  - Microsoft Entra ID
- See: https://helpdocs.objective.com/3sixty_tech/Content/administration/user-management.htm
When using Objective Redact with 3Sixty, the documentation mentions the redaction task (https://helpdocs.objective.com/3sixty_user/Content/tasks/redactor.htm), but there is also an OpenAI chat completion task for redaction. Which task do you recommend for redaction?
- The 3Sixty Redaction tasks are not required.
- 3Sixty will create a target file for document redaction via the OpenAI Chat task.
- The target file will contain all redaction terms found by the OpenAI Chat task based on the prompt provided. Note that this may require several rounds of prompt refinement to get the desired output.
- Redaction is then completed via Objective Redact, not 3Sixty. This is a manual task which involves loading the source document and target redaction file into Redact and confirming that all required terms have been located.
To use AI redaction, does an AI Embeddings task need to be used before the redaction task can be used?
- No, AI Embeddings tasks are not needed.
For AI redaction, is there a way to use a locally-installed LLM (such as Llama), or to point to an LLM hosted elsewhere (such as Azure OpenAI)?
- You can use any OpenAI-compliant LLM, either installed locally or cloud-hosted.
- 3Sixty only requires an endpoint and an API key.
- Note: Azure OpenAI is a slight exception, as its API differs from other OpenAI-compatible models and would need a small product uplift to work.
- LLM options:
  - Cloud-based: an option if it passes security considerations.
  - On-premise: if compliance or security considerations mean data cannot leave the environment, we suggest using an open-source GPT model deployed locally.
    - The recommended model is gpt-oss: https://openai.com/index/introducing-gpt-oss/
    - This can be deployed via a GPU-supported Docker-based setup (e.g., Ollama: https://hub.docker.com/r/ollama/ollama), allowing full control over data and processing; see the sketch after this answer.
    - Ollama already supports gpt-oss: https://ollama.com/library/gpt-oss
    - Running this model on-premise would require approx. 100 GB of GPU memory.
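A sketch of that setup using the commands documented on the Ollama Docker Hub page (assumes NVIDIA GPUs with the NVIDIA Container Toolkit installed on the host):
# Start the Ollama server with GPU access and a persistent model cache
docker run -d --gpus=all -v ollama:/root/.ollama -p 11434:11434 --name ollama ollama/ollama

# Pull and run the gpt-oss model inside the container
docker exec -it ollama ollama run gpt-oss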
The plan is to perform redaction of PDF documents. Do you recommend that PII Detection (https://helpdocs.objective.com/3sixty_user/Content/tasks/pii-detection.htm) or Trivial Detection (https://helpdocs.objective.com/3sixty_user/Content/tasks/trivial.htm) are also used in addition to the redaction task, or would the redaction task be enough by itself?
- The OpenAI Chat task can be used to find PII as well as other suitable text for redaction; there should be no requirement to also use the 3Sixty PII task, which is regex-based.
- The 3Sixty Trivial task filters documents out based on age or type; it can optionally be added to jobs if useful but is not required.
- Non-AI PII detection is supported in Objective Redact directly, so you can apply this in addition to the AI-generated target file if useful.
- The output of the OpenAI Chat task is a Redact target file containing all the text to be redacted from that particular document.
- The document and the target file can then be loaded into Objective Redact for manual confirmation and completion.