UniFi Protect Retention Planning Playbook for Homes and Small Offices

HID Consulting

Most retention problems in UniFi Protect systems are not hardware problems. They are policy problems. Teams buy an NVR, connect cameras, and leave default settings in place. Months later, they discover the footage they actually needed has already rolled off disk.

Start with incident timelines, not terabytes

Before calculating storage, define the questions your footage must answer:

  1. How far back must critical evidence exist? (7, 14, 30, 90 days)
  2. Which zones are legally or operationally sensitive?
  3. How long does your team usually take to notice and review incidents?

For a package room, 30 days may be reasonable. For a low-risk side yard, 7-14 days might be enough. Set retention targets per zone rather than copying one value globally.

Segment cameras into policy tiers

We use three policy tiers on most projects:

  • Tier A (critical): entrances, cash handling, medicine storage, loading docks
  • Tier B (important): hallways, shared spaces, parking lanes
  • Tier C (context): perimeter corners, low-risk exterior coverage

Then map recording mode:

  • Tier A: continuous + smart detections
  • Tier B: motion-indexed with tuned sensitivity
  • Tier C: smart detections only, lower bitrate

This simple model dramatically improves useful retention without replacing hardware.
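The tier-to-mode mapping above can be captured as a small lookup table. This is an illustrative sketch in Python; the tier labels follow the article, but the setting values are placeholders, not actual UniFi Protect API fields.

```python
# Tier policy table: tier labels from the article, setting values illustrative.
TIER_POLICY = {
    "A": {"mode": "continuous+smart", "bitrate": "high"},
    "B": {"mode": "motion", "bitrate": "medium", "sensitivity": "tuned"},
    "C": {"mode": "smart-only", "bitrate": "low"},
}

def policy_for(tier):
    """Look up the recording policy for a camera's tier."""
    return TIER_POLICY[tier]

print(policy_for("A")["mode"])  # continuous+smart
```

Keeping the policy in one table (rather than per-camera ad hoc settings) makes the monthly audit a diff against a single source of truth.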

Sizing formula (quick estimate)

Use this rough estimate for initial planning:

Total daily GB = sum(camera bitrate in Mbps × 10.8)

The 10.8 factor converts bitrate to daily volume: 1 Mbps is 0.125 MB/s, and 0.125 MB/s × 86,400 seconds ≈ 10,800 MB, or 10.8 GB per day. Multiply the daily total by your retention days, then add 20-30% overhead for motion spikes and operational cushion. Validate against real traffic after deployment.
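Here is the quick estimate as a worked Python sketch. The camera bitrates and 25% cushion are illustrative values, chosen from the 20-30% range above.

```python
# 1 Mbps of video writes about 10.8 GB per day
# (0.125 MB/s × 86,400 s ≈ 10,800 MB).
GB_PER_MBPS_DAY = 10.8

def required_storage_gb(bitrates_mbps, retention_days, overhead=0.25):
    """Daily volume × retention days, plus an operational cushion."""
    daily_gb = sum(b * GB_PER_MBPS_DAY for b in bitrates_mbps)
    return daily_gb * retention_days * (1 + overhead)

# Four 4 Mbps cameras, 30-day target, 25% cushion:
print(round(required_storage_gb([4.0, 4.0, 4.0, 4.0], 30)))  # 6480
```

So four modest cameras at a 30-day target already want roughly 6.5 TB of usable disk, which is why per-tier bitrates matter.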

Common mistakes that collapse retention

1) High bitrate everywhere

Not every zone needs max detail at all hours. Reserve high bitrate for critical scenes.

2) No scene-specific frame rates

A hallway does not need the same frame rate as a point-of-sale counter.

3) No monthly retention audit

Storage behavior drifts after camera additions or firmware changes. Review monthly.

4) No written policy

If nobody can explain why retention is set a certain way, you do not have a policy—just defaults.

What to document for reliability

Your runbook should include:

  • Camera list and tier assignment
  • Retention target by tier
  • Recording mode by camera
  • Expected days on disk (baseline)
  • Procedure for exporting and preserving clips

Without this, teams repeat investigations and lose confidence in the system.

Final recommendation

Treat retention as an operational control, not a one-time setup. If your policy is explicit, tiered, and reviewed monthly, UniFi Protect becomes far more dependable during real incidents.

Detailed sizing workflow

After initial estimates, run a seven-day observation period in production-like conditions. Capture per-camera average bitrate, motion density, and peak-hour traffic. Use these values to recalculate retention with a realistic buffer.

A practical worksheet includes:

  • camera ID and zone tier
  • target frame rate and resolution
  • average and peak bitrate
  • recording mode (continuous, motion, smart)
  • expected days retained

This sheet becomes the foundation for capacity decisions and future expansion planning.
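The worksheet rows above translate naturally into a small data model, from which "expected days retained" falls out of total usable disk. This is a sketch under assumed field names and example bitrates, reusing the 10.8 GB/day-per-Mbps factor from the quick-sizing formula.

```python
from dataclasses import dataclass

@dataclass
class CameraRow:
    """One worksheet row; field names are illustrative."""
    camera_id: str
    tier: str
    avg_bitrate_mbps: float
    mode: str

def expected_days(rows, usable_disk_gb):
    """Days of footage the disk can hold at observed average bitrates."""
    daily_gb = sum(r.avg_bitrate_mbps * 10.8 for r in rows)
    return usable_disk_gb / daily_gb

rows = [
    CameraRow("cam-entrance", "A", 6.0, "continuous"),
    CameraRow("cam-hallway", "B", 3.0, "motion"),
    CameraRow("cam-yard", "C", 1.5, "smart"),
]
print(round(expected_days(rows, 4000), 1))  # 35.3
```

Recomputing this after the seven-day observation period, with measured rather than assumed bitrates, gives the realistic baseline the worksheet is meant to anchor.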

Zone-specific quality profiles

Not every camera should run the same quality profile. Define profiles such as:

  • Investigative profile: high detail for identification-critical zones
  • Operational profile: medium detail for workflow visibility
  • Context profile: efficient settings for broad situational awareness

Map each camera to a profile and review quarterly.
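A quarterly review is easier when drift is mechanically detectable. The sketch below flags cameras whose assigned profile no longer matches the profile expected for their tier; the tier-to-profile mapping is an assumption consistent with the tiers defined earlier, and the camera entries are illustrative.

```python
# Assumed mapping from policy tier to quality profile.
EXPECTED_PROFILE = {"A": "investigative", "B": "operational", "C": "context"}

def profile_mismatches(cameras):
    """cameras: list of (camera_id, tier, profile) tuples.
    Returns the IDs whose profile drifted from the tier's expected one."""
    return [cid for cid, tier, profile in cameras
            if EXPECTED_PROFILE.get(tier) != profile]

fleet = [
    ("cam-entrance", "A", "investigative"),
    ("cam-hallway", "B", "context"),  # drifted, e.g. after a settings reset
]
print(profile_mismatches(fleet))  # ['cam-hallway']
```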

Alert strategy tied to retention policy

Retention and alerting should reinforce each other. If a zone is critical enough for long retention, it likely deserves higher-priority alert routing. Conversely, low-priority zones can use summary notifications to avoid fatigue.

Export and legal readiness

Even in non-regulated environments, evidence handling should be disciplined. Standardize export naming, include event window padding before/after trigger time, and store copies in controlled repositories with access logs.
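Standardized naming and window padding can be pinned down in a few lines. The name layout and the two-minute/five-minute padding values below are illustrative conventions, not a UniFi Protect export API.

```python
from datetime import datetime, timedelta

def export_window(event_time, pre=timedelta(minutes=2), post=timedelta(minutes=5)):
    """Pad the clip before and after the trigger time."""
    return event_time - pre, event_time + post

def export_name(camera_id, event_time, incident_id):
    """Consistent, sortable file name: timestamp, camera, incident reference."""
    return f"{event_time:%Y%m%d-%H%M%S}_{camera_id}_{incident_id}.mp4"

t = datetime(2024, 5, 2, 14, 30, 0)
start, end = export_window(t)
print(export_name("cam-entrance", t, "INC-0041"))
# 20240502-143000_cam-entrance_INC-0041.mp4
```

Whatever convention you choose, write it into the runbook so every export is retrievable by timestamp or incident ID without guesswork.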

Capacity forecasting for growth

When adding cameras, estimate incremental storage impact before installation. A conservative method is to model worst-case motion density and reserve headroom. This prevents sudden retention collapse after expansion.
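The expansion check can be a one-line recalculation: model new cameras at their worst-case (continuous) bitrate and see where retention lands. Disk size, current daily volume, and camera bitrates below are illustrative.

```python
GB_PER_MBPS_DAY = 10.8  # same conversion factor as the sizing formula

def days_after_expansion(usable_disk_gb, current_daily_gb, new_bitrates_mbps):
    """Retention days after adding cameras modeled at worst-case bitrate."""
    new_daily_gb = sum(b * GB_PER_MBPS_DAY for b in new_bitrates_mbps)
    return usable_disk_gb / (current_daily_gb + new_daily_gb)

# 8 TB usable, currently writing 150 GB/day, adding two 4 Mbps cameras:
print(round(days_after_expansion(8000, 150, [4.0, 4.0]), 1))  # 33.8
```

In this example retention drops from roughly 53 days to about 34, exactly the kind of collapse worth catching before installation rather than after.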

Monthly governance review

A lightweight monthly meeting can keep policy healthy:

  • compare expected vs actual retention days
  • review top incident retrieval times
  • evaluate false positives by zone
  • approve tuning changes and document rationale
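The first agenda item, expected vs actual retention, is simple to automate. This sketch flags any tier whose oldest footage on disk falls short of target by more than a tolerance; tier names and day counts are illustrative.

```python
def retention_gaps(targets_days, actual_days, tolerance=0.9):
    """Return {tier: (actual, target)} for tiers below 90% of target."""
    return {tier: (actual_days.get(tier, 0), target)
            for tier, target in targets_days.items()
            if actual_days.get(tier, 0) < target * tolerance}

targets = {"A": 30, "B": 14, "C": 7}
actual = {"A": 31, "B": 11, "C": 8}
print(retention_gaps(targets, actual))  # {'B': (11, 14)}
```

A non-empty result is the trigger for the tuning-change discussion: either reduce bitrate in that tier or accept a revised target, but record the decision either way.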

Field checklist you can apply this week

If you want quick progress without waiting for a major redesign, run a one-week stabilization sprint:

  • Day 1: verify inventory accuracy. List every gateway, switch, AP, camera, controller, and automation hub with firmware version and owner.
  • Day 2: validate security controls. Check admin MFA, role separation, remote access path, and basic inter-network policy intent.
  • Day 3: review reliability controls. Confirm backup freshness, restore viability, and the top five noisy alerts.
  • Day 4: execute one failure simulation relevant to your environment (WAN outage, camera failure, automation controller restart, or identity-provider disruption).
  • Day 5: close the loop with documentation updates and a short stakeholder summary.

The goal of this sprint is not perfection. It is to replace assumptions with tested facts. Most teams discover that their biggest risks are not unknown technologies; they are undocumented dependencies and unowned operational tasks. A one-week sprint gives you a clear remediation queue and creates momentum for deeper improvements.

When reviewing results, classify findings into three buckets: immediate fixes (high risk, low effort), planned engineering work (high impact, medium effort), and deferred optimizations (lower impact or high complexity). This triage keeps teams focused and prevents the common pattern of starting too many initiatives at once.