Executive Summary

Critical Issues: 6
High Priority: 5
Medium Priority: 6
Low / Best Practice: 5
DBs Missing CHECKDB: 6
DBs with Backup Issues: 10
Deadlocks (7 days): 0
High-Impact Missing Indexes: 0

RockyPC is a single-instance SQL Server 2022 Developer Edition running on a Windows 10 Home hypervisor with a diverse set of databases (CRM, DigitsSolver, SQLStorm, tpcc, tpch, tpch10, WideWorldImporters). The server has been running for only ~1 day, which limits some statistics, but several significant operational, integrity, and backup concerns are already clearly visible.

The most urgent concerns are: six user databases have never had CHECKDB run (reporting epoch date 1900-01-01), multiple databases have no full backup on record (including CRM, tpcc, tpch10, and all three system databases), and several databases in FULL recovery model have no log backups, meaning transaction logs will grow without bound. I/O performance is excellent (data-file and log-write latencies average ≤1 ms), wait statistics are dominated by expected background sleeps, and no deadlocks or missing indexes are detected — the server is performing well workload-wise. Configuration improvements such as enabling optimize for ad hoc workloads, enabling backup compression, enabling backup checksum, and tuning Query Store statistics collection intervals are quick wins. Memory configuration is conservative and should be reviewed given the available 32 GB of physical RAM.

⚠️ This is a Developer Edition instance on Windows 10 Home. Developer Edition has all Enterprise features but is not licensed for production use. If this server hosts production data, a production license upgrade is required.

🔥 Top Priorities at a Glance

  1. Critical #1 — Run DBCC CHECKDB on 6 Databases Immediately
    WideWorldImporters, SQLStorm, tpcc, tpch, CRM, and tpch10 show a last-run date of 1900-01-01, indicating CHECKDB has never been executed. Undetected corruption is a data-loss risk. Confidence: Very High (99%)
  2. Critical #2 — Establish Full Backup Strategy for All Databases
    CRM, tpcc, tpch10, DigitsSolver, master, model, and msdb have never been backed up. WideWorldImporters last backup was 1,284 days ago. Data loss risk is severe. Confidence: Very High (99%)
  3. Critical #3 — Fix Log Backup Gap on FULL Recovery Databases
    DigitsSolver, tpcc, tpch, CRM, tpch10, and model are in FULL recovery with no log backups. Transaction logs will grow indefinitely and point-in-time recovery is impossible. Confidence: Very High (99%)
  4. High #4 — Enable Optimize for Ad Hoc Workloads
    Currently disabled. This prevents single-use plan stubs from being cached, reducing plan cache memory pressure. Confidence: High (95%)
  5. High #5 — Enable Backup Compression and Checksum by Default
    Both are disabled. Backup compression reduces backup size and duration; backup checksum enables early corruption detection. Confidence: High (95%)

1. Performance Improvements (CPU, Memory, I/O)

  1. Medium Review Max Server Memory Setting

    Max server memory is set to 23,168 MB (~71% of 32 GB physical RAM) on a system with only 452 MB committed out of 23,168 MB target. This indicates very light current activity. The min server memory running value of 16 MB is fine. However, with 16 logical CPUs and multiple large databases (tpch, tpch10, WideWorldImporters), SQL Server may benefit from additional memory during analytical workloads. The current cap of 23,168 MB reserves ~9 GB for the OS, which is generally appropriate for Windows 10, but verify OS needs.

    • Current committed memory: 452 MB — no immediate pressure.
    • Max server memory of 23,168 MB is reasonable given 32 GB physical, but re-evaluate if workload grows.
    • Consider raising to 26,000–27,000 MB if OS stability allows.
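    If review of OS headroom supports it, the cap can be raised with sp_configure; the 26,624 MB value below is illustrative, within the 26,000–27,000 MB range suggested above:

    ```sql
    -- Raise max server memory (value is illustrative; verify OS headroom first)
    EXEC sp_configure 'show advanced options', 1;
    RECONFIGURE;
    EXEC sp_configure 'max server memory (MB)', 26624;
    RECONFIGURE;
    ```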
    Confidence: Medium (70%) — low uptime limits memory profiling accuracy
  2. Low I/O Performance Is Excellent — No Action Required

    Data file average latencies are ≤1 ms, and log read latencies are 3–7 ms. These are well within best-practice thresholds (<10 ms for data files, <5 ms for log writes). Log write latencies are effectively 0 ms, indicating fast storage. Total I/O stalls are minimal given the activity levels.

    ✓ I/O subsystem is healthy. No tuning action needed at this time.
    Confidence: High (90%)
  3. Low Wait Statistics Are Benign — Background Waits Only

    Both top wait types (QDS_PERSIST_TASK_MAIN_LOOP_SLEEP and PWAIT_DIRECTLOGCONSUMER_GETNEXT) are background infrastructure sleeps, each representing ~50% of total waits — this is normal and expected behavior. These are Query Store persistence and log consumer background threads sleeping between work intervals. No user-query waits (CXPACKET, LCK_M_*, PAGEIOLATCH, SOS_SCHEDULER_YIELD) are detected.

    ✓ No actionable wait-based performance bottleneck detected.
    Confidence: High (92%)
  4. Medium Enable tempdb Metadata Memory-Optimized Tables

    SQL Server 2019+ supports memory-optimized tempdb metadata, which can significantly reduce system table contention on busy servers. Currently disabled. With 16 CPUs this feature can improve scalability if tempdb system table contention ever becomes an issue.

    -- Enable memory-optimized tempdb metadata (requires restart)
    ALTER SERVER CONFIGURATION
        SET MEMORY_OPTIMIZED TEMPDB_METADATA = ON;
    -- Verify: SERVERPROPERTY('IsTempdbMetadataMemoryOptimized')
    Confidence: Medium (65%) — low current tempdb activity, proactive benefit
  5. Medium Verify MAXDOP Configuration for Analytical Workloads

    MAXDOP is set to 8 on a 16-logical-CPU machine. This is a valid configuration for mixed workloads, but for the analytical TPC-H benchmark databases (tpch, tpch10), allowing higher parallelism may improve query performance. The cost threshold for parallelism at 30 is a reasonable starting point above the default of 5.

    • Current MAXDOP: 8 (acceptable for OLTP-dominant mix)
    • For analytical databases, consider query-level MAXDOP hints or Resource Governor to allow higher DOP.
    • Cost threshold of 30 is appropriate for a mixed workload; consider 40–50 if plan cache shows many parallel plans.
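    A query-level override keeps the server-wide MAXDOP of 8 while letting individual analytical queries use more schedulers. The table and query below are hypothetical illustrations (standard TPC-H names), not objects confirmed on this instance:

    ```sql
    -- Hypothetical analytical query in tpch allowed to use all 16 schedulers
    SELECT l_returnflag, SUM(l_extendedprice) AS revenue
    FROM dbo.lineitem
    GROUP BY l_returnflag
    OPTION (MAXDOP 16);
    ```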
    Confidence: Medium (68%)

2. Configuration Optimizations

  1. High Enable Optimize for Ad Hoc Workloads

    Currently disabled. This option causes SQL Server to store a small stub the first time an ad hoc query executes rather than the full plan, saving significant plan cache memory for single-use queries. This is a safe, always-recommended setting.

    EXEC sp_configure 'optimize for ad hoc workloads', 1;
    RECONFIGURE;
    Confidence: Very High (98%) — universally recommended best practice
  2. High Enable Backup Compression and Backup Checksum by Default

    Both backup compression default (0) and backup checksum default (0) are disabled. Enabling compression reduces backup file size and duration; enabling checksum validates data integrity during backup, allowing earlier detection of corruption.

    EXEC sp_configure 'backup compression default', 1;
    RECONFIGURE;
    
    EXEC sp_configure 'backup checksum default', 1;
    RECONFIGURE;
    Confidence: Very High (97%)
  3. Medium Enable Blocked Process Threshold Monitoring

    The blocked process threshold is set to 0 (disabled). Setting it to 5–10 seconds enables the blocked process report event in Extended Events / SQL Trace, which is invaluable for diagnosing blocking chains in OLTP workloads like tpcc.

    EXEC sp_configure 'blocked process threshold (s)', 5;
    RECONFIGURE;
    Confidence: High (88%)
  4. Medium Set Min Server Memory to a Reasonable Floor

    The configured min server memory is 0 MB (running: 16 MB). On a dedicated SQL Server, setting a minimum floor prevents the OS from aggressively trimming the buffer pool during low-activity periods, reducing plan/buffer cache churn after periods of inactivity.

    -- Set min server memory to ~2 GB for a lightly loaded dedicated server
    EXEC sp_configure 'min server memory (MB)', 2048;
    RECONFIGURE;
    Confidence: Medium (72%)
  5. Low Upgrade to Latest SQL Server 2022 CU (KB5077465 is a GDR)

    The server is running 16.0.1170.5 (RTM-GDR), which is the General Distribution Release security patch, not the latest Cumulative Update. CUs include bug fixes, stability improvements, and IQP enhancements beyond GDR patches. For SQL Server 2022, the latest CU should be applied.

    • Current: RTM-GDR (KB5077465) — security-focused patch only
    • Recommended: Apply the latest SQL Server 2022 CU for full bug fixes and performance improvements
    • Check: Microsoft SQL Server Latest Updates
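    The current patch level can be confirmed via SERVERPROPERTY before and after applying a CU:

    ```sql
    -- Confirm build and update level (e.g. update_level 'CU' vs GDR builds)
    SELECT SERVERPROPERTY('ProductVersion')         AS build,
           SERVERPROPERTY('ProductLevel')           AS product_level,
           SERVERPROPERTY('ProductUpdateLevel')     AS update_level,
           SERVERPROPERTY('ProductUpdateReference') AS kb_article;
    ```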
    Confidence: High (90%)
  6. Low Review WideWorldImporters Compatibility Level (130 vs 160)

    WideWorldImporters is running at compatibility level 130 (SQL Server 2016), while all other user databases are at 160 (SQL Server 2022). This database misses out on Intelligent Query Processing, Parameter Sensitive Plan optimization, DOP Feedback, and other SQL 2022 engine improvements.

    -- Test thoroughly in a non-production environment first
    ALTER DATABASE WideWorldImporters
        SET COMPATIBILITY_LEVEL = 160;
    
    -- Verify Query Store captures any plan regressions after change
    Confidence: High (85%) — functional testing required before changing

3. Index Optimization Opportunities

No high-impact missing indexes detected across all databases at this time. This is expected given the short uptime (~1 day) — the missing index DMVs accumulate data over time. Re-run index analysis after at least 3–7 days of representative workload.
  1. Low Schedule Index Analysis After Representative Workload Accumulates

    With only 1 day of uptime, missing index and index usage statistics are not yet representative. Re-run a full index analysis using the Index Tuning goal in AI SQL Tuner after at least 7 days of typical workload for each database, particularly for SQLStorm (highest I/O), WideWorldImporters, tpch, and tpch10.

    • Priority databases for future index review: SQLStorm, WideWorldImporters, tpcc, CRM
    • Analytical databases (tpch, tpch10): consider columnstore indexes for aggregate query patterns (Enterprise feature available in Developer Edition)
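    When the re-analysis runs, the standard missing-index DMVs can also be queried directly; this is a common community pattern, ordered by a rough impact score:

    ```sql
    -- Top missing-index suggestions by estimated impact (server-wide)
    SELECT TOP (20)
           DB_NAME(mid.database_id) AS database_name,
           mid.statement            AS table_name,
           mid.equality_columns, mid.inequality_columns, mid.included_columns,
           migs.user_seeks * migs.avg_user_impact * migs.avg_total_user_cost AS impact_score
    FROM sys.dm_db_missing_index_details AS mid
    JOIN sys.dm_db_missing_index_groups AS mig
        ON mig.index_handle = mid.index_handle
    JOIN sys.dm_db_missing_index_group_stats AS migs
        ON migs.group_handle = mig.index_group_handle
    ORDER BY impact_score DESC;
    ```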
    Confidence: High (88%)
  2. Low Consider Columnstore Indexes for TPC-H Analytical Databases

    The tpch and tpch10 databases represent analytical workloads. SQL Server 2022 Developer Edition (Enterprise engine) supports clustered and non-clustered columnstore indexes, which can provide 10–100x query acceleration for aggregation-heavy analytical queries.

    • Evaluate adding non-clustered columnstore indexes on large fact tables in tpch and tpch10
    • Use Query Store to compare plan performance before and after
    • Batch mode processing is automatically leveraged with columnstore
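    A sketch of the approach, assuming the standard TPC-H lineitem schema (column names should be verified against the actual tables before use):

    ```sql
    -- Example: nonclustered columnstore on the TPC-H lineitem fact table
    -- (assumes standard TPC-H column names; adjust to the actual schema)
    CREATE NONCLUSTERED COLUMNSTORE INDEX NCCI_lineitem
    ON dbo.lineitem (l_orderkey, l_shipdate, l_quantity,
                     l_extendedprice, l_discount, l_returnflag);
    ```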
    Confidence: Medium (70%) — requires schema and workload review

4. Query Store Optimization

Query Store is enabled and READ_WRITE on all user databases — this is excellent. Several configuration inconsistencies exist between WideWorldImporters and the other databases.

Database           | Flush Interval | Stats Interval | Max Plans/Query | Storage Used | Status
CRM                | 15 min         | 60 min         | 200             | 0.10%        | ✓ OK
DigitsSolver       | 15 min         | 60 min         | 200             | 0.10%        | ✓ OK
SQLStorm           | 15 min         | 60 min         | 200             | 1.40%        | ✓ OK
tpcc               | 15 min         | 60 min         | 200             | 0.10%        | ✓ OK
tpch               | 15 min         | 60 min         | 200             | 0.10%        | ✓ OK
tpch10             | 15 min         | 60 min         | 200             | 0.10%        | ✓ OK
WideWorldImporters | 50 min ⚠️      | 15 min ⚠️      | 1,000 ⚠️        | 0.40%        | ⚠️ Review
  1. Medium Standardize WideWorldImporters Query Store Settings

    WideWorldImporters has non-standard Query Store settings: a 50-minute flush interval (vs 15 min standard), 15-minute stats interval (vs 60 min), and 1,000 max plans per query (vs 200). These may be intentional but should be reviewed for consistency. A 50-minute flush interval increases data loss risk on unexpected restart; 1,000 plans per query can consume significant storage for high-variation queries.

    -- Standardize WideWorldImporters Query Store settings
    ALTER DATABASE WideWorldImporters SET QUERY_STORE (
        DATA_FLUSH_INTERVAL_SECONDS = 900,   -- 15 minutes
        INTERVAL_LENGTH_MINUTES = 60,
        MAX_PLANS_PER_QUERY = 200,
        QUERY_CAPTURE_MODE = AUTO,
        SIZE_BASED_CLEANUP_MODE = AUTO,
        STALE_QUERY_THRESHOLD_DAYS = 30
    );
    Confidence: Medium (75%) — verify if non-standard settings are intentional
  2. Low Leverage SQL Server 2022 IQP Features via Query Store

    With compatibility level 160 and Query Store enabled, databases automatically benefit from:

    • Parameter Sensitive Plan (PSP) optimization — multiple plans for skewed parameter distributions
    • Degree of Parallelism Feedback — automatic MAXDOP adjustment based on observed performance
    • Memory Grant Feedback — reduces over/under-grant memory issues
    • Cardinality Estimation Feedback — improves estimates over time

    No action required — these are active automatically. Monitor Query Store for plan force overrides that may be needed.
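    Manually forced plans (and any plan-forcing failures) can be spotted in the Query Store catalog views, per database:

    ```sql
    -- List forced plans and any plan-forcing failures in the current database
    SELECT q.query_id, p.plan_id,
           p.force_failure_count, p.last_force_failure_reason_desc
    FROM sys.query_store_plan AS p
    JOIN sys.query_store_query AS q
        ON q.query_id = p.query_id
    WHERE p.is_forced_plan = 1;
    ```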

    Confidence: High (90%)

5. Database Integrity Maintenance (CHECKDB)

⚠️ CRITICAL: Six databases report a last CHECKDB date of January 1, 1900 — meaning DBCC CHECKDB has never been run on these databases on this instance. This is a significant data integrity risk. Undetected corruption can lead to silent data loss.
Database           | Last CHECKDB        | Days Since | Status
DigitsSolver       | Apr 12 2026 6:33 PM | 1          | ✓ OK
WideWorldImporters | Jan 1 1900 (never)  | 46,123     | 🚨 Critical
SQLStorm           | Jan 1 1900 (never)  | 46,123     | 🚨 Critical
tpcc               | Jan 1 1900 (never)  | 46,123     | 🚨 Critical
tpch               | Jan 1 1900 (never)  | 46,123     | 🚨 Critical
CRM                | Jan 1 1900 (never)  | 46,123     | 🚨 Critical
tpch10             | Jan 1 1900 (never)  | 46,123     | 🚨 Critical
  1. Critical Run DBCC CHECKDB Immediately on All 6 Affected Databases

    Schedule and execute CHECKDB on all six databases. For large databases (tpch10, WideWorldImporters), use PHYSICAL_ONLY for a faster first pass, then schedule a full check.

    -- Quick physical check first (faster for large databases)
    DBCC CHECKDB ('WideWorldImporters') WITH PHYSICAL_ONLY, NO_INFOMSGS;
    DBCC CHECKDB ('SQLStorm') WITH PHYSICAL_ONLY, NO_INFOMSGS;
    DBCC CHECKDB ('tpcc') WITH PHYSICAL_ONLY, NO_INFOMSGS;
    DBCC CHECKDB ('tpch') WITH PHYSICAL_ONLY, NO_INFOMSGS;
    DBCC CHECKDB ('CRM') WITH PHYSICAL_ONLY, NO_INFOMSGS;
    DBCC CHECKDB ('tpch10') WITH PHYSICAL_ONLY, NO_INFOMSGS;
    
    -- Follow up with full logical check (can be scheduled during off-peak)
    DBCC CHECKDB ('WideWorldImporters') WITH NO_INFOMSGS, ALL_ERRORMSGS;
    -- (repeat for each database)
    Confidence: Very High (99%)
  2. Critical Establish a Weekly CHECKDB Maintenance Schedule

    Implement a SQL Server Agent job or Ola Hallengren's maintenance solution to run CHECKDB weekly on all user databases, with results logged to a maintenance log table.

    -- Example: Weekly CHECKDB using SQL Agent (run per database)
    USE msdb;
    EXEC sp_add_job @job_name = N'Weekly CHECKDB - All Databases';
    -- Configure steps to run DBCC CHECKDB for each user database
    -- Schedule: Weekly, Sunday 2:00 AM (or low-activity window)
    
    -- Recommended: Ola Hallengren's solution
    -- https://ola.hallengren.com/sql-server-integrity-check.html
    -- DatabaseIntegrityCheck @Databases = 'USER_DATABASES',
    --                         @CheckCommands = 'CHECKDB';
    Confidence: Very High (99%)

6. Security Enhancements

  1. Low Unencrypted Telemetry Connection — Minimal Risk (Informational)

    The NT SERVICE\SQLTELEMETRY connection (SQLServerCEIP) is unencrypted but uses the shared memory transport. This is minimal risk: no user data is exposed, and shared memory connections are local-only with no network exposure. No action required.

    ✓ Telemetry connection via shared memory — local only, no user data. Acceptable.
    Confidence: High (92%)
  2. Low Sysadmin Membership Is Minimal — Appropriate

    Only RockyPC\k_a_f holds the sysadmin role. This is an excellent least-privilege posture. As the server hosts multiple databases, continue ensuring application logins use database-specific roles (db_datareader, db_datawriter, or custom roles) rather than sysadmin.

    ✓ 1 sysadmin member — minimal attack surface. Good practice maintained.
    Confidence: High (90%)
  3. Medium Mixed Authentication Mode Enabled — Review if Needed

    The instance allows SQL Authentication alongside Windows Authentication (mixed mode). If no SQL logins are actively used, consider switching to Windows-only authentication to reduce the attack surface (especially important if this instance becomes network-accessible).

    -- Check for active SQL logins
    SELECT name, type_desc, is_disabled
    FROM sys.server_principals
    WHERE type = 'S'  -- SQL logins
      AND name NOT IN ('sa', '##MS_PolicyEventProcessingLogin##',
                       '##MS_PolicyTsqlExecutionLogin##')
      AND is_disabled = 0;
    
    -- If none needed, switch to Windows-only auth
    -- (requires SQL Server restart, done via SSMS Server Properties > Security)
    Confidence: Medium (75%)
  4. Medium Verify 'sa' Login is Disabled and Renamed

    As a security best practice, the sa account should be disabled and renamed to reduce brute-force risk. Given mixed-auth mode is enabled, this is especially important.

    -- Check sa login status
    SELECT name, is_disabled FROM sys.server_principals
    WHERE name = 'sa';
    
    -- Disable sa if not used
    ALTER LOGIN [sa] DISABLE;
    
    -- Or rename and disable for defense in depth
    ALTER LOGIN [sa] WITH NAME = [sa_disabled];
    ALTER LOGIN [sa_disabled] DISABLE;
    Confidence: High (85%)
  5. Medium Developer Edition Not Licensed for Production — Verify Usage

    SQL Server Developer Edition has all Enterprise features but is licensed for development and testing only — not production use. If this server hosts any production data or serves production applications, it must be upgraded to a production-licensed edition (Standard or Enterprise).

    ⚠️ Confirm this server is development/test only. License compliance is a legal and business risk.
    Confidence: Very High (95%) — licensing terms are clear

7. Concurrency Issues (Deadlocks, Locking)

No deadlocks detected in the past 7 days. No deadlock analysis action required.
  1. Low Enable Read Committed Snapshot Isolation (RCSI) for OLTP Databases

    For OLTP databases like tpcc and CRM, Read Committed Snapshot Isolation (RCSI) eliminates reader-writer blocking by using row versioning. This can dramatically reduce contention without application code changes. WideWorldImporters typically ships with snapshot isolation enabled.

    -- Check current isolation settings
    SELECT name, is_read_committed_snapshot_on, snapshot_isolation_state_desc
    FROM sys.databases
    WHERE name IN ('tpcc', 'CRM', 'SQLStorm');
    
    -- Enabling RCSI requires briefly disconnecting active sessions
    -- (ROLLBACK IMMEDIATE) and increases tempdb version store usage;
    -- enable during a low-activity window
    ALTER DATABASE tpcc
        SET READ_COMMITTED_SNAPSHOT ON WITH ROLLBACK IMMEDIATE;
    
    ALTER DATABASE CRM
        SET READ_COMMITTED_SNAPSHOT ON WITH ROLLBACK IMMEDIATE;
    Confidence: Medium (72%) — verify application compatibility first
  2. Low Set Up Extended Events Session for Blocking Monitoring

    With the blocked process threshold set to 5 seconds (recommended above), create an Extended Events session to capture blocking chains for proactive monitoring, especially for the tpcc OLTP workload.

    -- Create lightweight blocking monitor session
    CREATE EVENT SESSION [BlockedProcessMonitor]
    ON SERVER
    ADD EVENT sqlserver.blocked_process_report
    ADD TARGET package0.ring_buffer(SET max_memory = 51200)
    WITH (MAX_DISPATCH_LATENCY = 5 SECONDS);
    
    ALTER EVENT SESSION [BlockedProcessMonitor]
        ON SERVER STATE = START;
    Confidence: High (85%)

8. Backup & Recovery

⚠️ Multiple critical backup gaps detected. Several databases have no recorded backup, and FULL recovery databases are accumulating un-backed-up transaction log, which prevents log space reuse.
Database           | Recovery Model | Last Full Backup | Log Backups | Risk Level
master             | SIMPLE         | Never            | N/A         | 🚨 Critical
model              | FULL           | Never            | None        | 🚨 Critical
msdb               | SIMPLE         | Never            | N/A         | 🚨 Critical
DigitsSolver       | FULL           | Never            | None        | 🚨 Critical
CRM                | FULL           | Never            | None        | 🚨 Critical
tpcc               | FULL           | Never            | None        | 🚨 Critical
tpch10             | FULL           | Never            | None        | 🚨 Critical
tpch               | FULL           | 142 days ago     | None        | High
SQLStorm           | SIMPLE         | 170 days ago     | N/A         | High
WideWorldImporters | SIMPLE         | 1,284 days ago   | N/A         | 🚨 Critical
  1. Critical Take Immediate Full Backups of All Databases
    -- Immediate full backup of all user and system databases
    BACKUP DATABASE [master]
        TO DISK = N'C:\Backups\master_full.bak'
        WITH COMPRESSION, CHECKSUM, STATS = 10;
    
    BACKUP DATABASE [model]
        TO DISK = N'C:\Backups\model_full.bak'
        WITH COMPRESSION, CHECKSUM, STATS = 10;
    
    BACKUP DATABASE [msdb]
        TO DISK = N'C:\Backups\msdb_full.bak'
        WITH COMPRESSION, CHECKSUM, STATS = 10;
    
    BACKUP DATABASE [DigitsSolver]
        TO DISK = N'C:\Backups\DigitsSolver_full.bak'
        WITH COMPRESSION, CHECKSUM, STATS = 10;
    
    BACKUP DATABASE [CRM]
        TO DISK = N'C:\Backups\CRM_full.bak'
        WITH COMPRESSION, CHECKSUM, STATS = 10;
    
    BACKUP DATABASE [tpcc]
        TO DISK = N'C:\Backups\tpcc_full.bak'
        WITH COMPRESSION, CHECKSUM, STATS = 10;
    
    BACKUP DATABASE [tpch]
        TO DISK = N'C:\Backups\tpch_full.bak'
        WITH COMPRESSION, CHECKSUM, STATS = 10;
    
    BACKUP DATABASE [tpch10]
        TO DISK = N'C:\Backups\tpch10_full.bak'
        WITH COMPRESSION, CHECKSUM, STATS = 10;
    
    BACKUP DATABASE [WideWorldImporters]
        TO DISK = N'C:\Backups\WideWorldImporters_full.bak'
        WITH COMPRESSION, CHECKSUM, STATS = 10;
    
    BACKUP DATABASE [SQLStorm]
        TO DISK = N'C:\Backups\SQLStorm_full.bak'
        WITH COMPRESSION, CHECKSUM, STATS = 10;
    Confidence: Very High (99%)
  2. Critical Implement Log Backup Schedule for FULL Recovery Databases

    Databases in FULL recovery model must have regular log backups to maintain the log backup chain, enable point-in-time recovery, and allow transaction log space reuse. Without log backups, the transaction log will grow indefinitely.

    -- Take log backups for FULL recovery databases (after full backup exists)
    -- Run every 15–60 minutes via SQL Agent job
    
    BACKUP LOG [DigitsSolver]
        TO DISK = N'C:\Backups\DigitsSolver_log.bak'
        WITH COMPRESSION, CHECKSUM, STATS = 10;
    
    BACKUP LOG [CRM]
        TO DISK = N'C:\Backups\CRM_log.bak'
        WITH COMPRESSION, CHECKSUM, STATS = 10;
    
    BACKUP LOG [tpcc]
        TO DISK = N'C:\Backups\tpcc_log.bak'
        WITH COMPRESSION, CHECKSUM, STATS = 10;
    
    BACKUP LOG [tpch]
        TO DISK = N'C:\Backups\tpch_log.bak'
        WITH COMPRESSION, CHECKSUM, STATS = 10;
    
    BACKUP LOG [tpch10]
        TO DISK = N'C:\Backups\tpch10_log.bak'
        WITH COMPRESSION, CHECKSUM, STATS = 10;
    
    BACKUP LOG [model]
        TO DISK = N'C:\Backups\model_log.bak'
        WITH COMPRESSION, CHECKSUM, STATS = 10;
    💡 Alternative: If these are purely test/benchmark databases with no point-in-time recovery requirement, consider switching to SIMPLE recovery model to avoid log growth without backups.
    -- Only if no PITR is needed and data is expendable/reproducible
    ALTER DATABASE tpch SET RECOVERY SIMPLE;
    ALTER DATABASE tpch10 SET RECOVERY SIMPLE;
    -- (shrink log after switching)
    DBCC SHRINKFILE (N'tpch_log', 64);
    Confidence: Very High (98%)
  3. High Establish Automated Backup Schedule via SQL Server Agent

    Implement a comprehensive backup strategy using SQL Server Agent jobs or Ola Hallengren's DatabaseBackup solution (recommended):

    • Full backups: Weekly (Sunday, off-hours)
    • Differential backups: Daily (except full backup day)
    • Log backups: Every 15–30 minutes for FULL recovery databases
    • Backup verification: Weekly RESTORE VERIFYONLY on backup files
    • Backup retention: Minimum 14 days; copy backups offsite or to Azure Blob Storage
    -- Recommended: Ola Hallengren's backup solution
    -- https://ola.hallengren.com/sql-server-backup.html
    
    -- Example DatabaseBackup call for full backups
    EXECUTE dbo.DatabaseBackup
        @Databases = 'USER_DATABASES',
        @BackupType = 'FULL',
        @Directory = N'C:\Backups',
        @Compress = 'Y',
        @Verify = 'Y',
        @CheckSum = 'Y',
        @CleanupTime = 336;  -- Keep 14 days
    Confidence: Very High (98%)

9. Operational Best Practices

  1. Medium Move System Database Files Off Data Drive (if Applicable)

    System databases (master, model, msdb, tempdb) share the C:\Data\MSSQL16.MSSQLSERVER\MSSQL\DATA\ path with user database data files on the same C:\Data\ volume. While I/O performance is currently excellent, mixing system and user databases on the same volume can complicate capacity management. If disk space becomes constrained, consider separating system databases to their own volume.
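    Current file placement can be inventoried with sys.master_files when planning any future separation:

    ```sql
    -- Inventory data/log file locations across all databases
    SELECT DB_NAME(database_id) AS database_name,
           type_desc, name AS logical_name, physical_name,
           size * 8 / 1024 AS size_mb
    FROM sys.master_files
    ORDER BY database_name, type_desc;
    ```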

    Confidence: Low (60%) — current I/O is healthy, low urgency
  2. Medium Enable Database Mail for Alerting

    Database Mail XPs are disabled. Without email alerting, SQL Server Agent job failures, blocked processes, and integrity check errors will go unnoticed. Enable Database Mail and configure operator notifications for critical events.

    -- Enable Database Mail
    EXEC sp_configure 'Database Mail XPs', 1;
    RECONFIGURE;
    
    -- Configure via SSMS: Management > Database Mail > Configure Database Mail
    -- Then set up SQL Agent Operators and Alerts for:
    -- Severity 16+ errors, job failures, and integrity check results
    Confidence: High (88%)
  3. Low Separate Log Files to a Dedicated Log Drive

    Transaction logs (*.ldf) are on C:\Logs\ — separate from data files on C:\Data\. This is good practice and is already implemented. Ensure the Logs volume has sufficient free space to accommodate log growth, particularly for FULL recovery databases pending log backup setup.

    ✓ Data and log files are on separate volumes — best practice maintained.
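    Free space on the volumes hosting each database file can be checked from within SQL Server:

    ```sql
    -- Free space per volume hosting database files
    SELECT DISTINCT vs.volume_mount_point,
           vs.total_bytes / 1048576     AS total_mb,
           vs.available_bytes / 1048576 AS free_mb
    FROM sys.master_files AS mf
    CROSS APPLY sys.dm_os_volume_stats(mf.database_id, mf.file_id) AS vs;
    ```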
    Confidence: High (90%)
  4. Low Collation Consideration for New Databases

    Server collation is SQL_Latin1_General_CP1_CI_AS, which is the legacy default. For new databases on SQL Server 2022, consider using Latin1_General_100_CI_AS_SC_UTF8 or another modern collation for better Unicode support. Existing databases should retain their current collation unless a migration is planned.
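    A database-level collation is set at creation time; the database name below is hypothetical:

    ```sql
    -- Hypothetical new database using a modern UTF-8 collation
    CREATE DATABASE NewAppDb
        COLLATE Latin1_General_100_CI_AS_SC_UTF8;
    ```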

    Confidence: Low (55%) — informational for new database creation only

10. Proactive Maintenance Suggestions

  1. Medium Implement Comprehensive Index Maintenance Schedule

    Establish automated index maintenance (rebuild/reorganize) and statistics updates. Ola Hallengren's IndexOptimize is the industry standard:

    • Rebuild indexes with fragmentation >30%
    • Reorganize indexes with fragmentation 10–30%
    • Update statistics on tables not covered by index rebuild
    • Schedule weekly during maintenance window
    -- Ola Hallengren IndexOptimize
    -- https://ola.hallengren.com/sql-server-index-and-statistics-maintenance.html
    EXECUTE dbo.IndexOptimize
        @Databases = 'USER_DATABASES',
        @FragmentationLow = NULL,
        @FragmentationMedium = 'INDEX_REORGANIZE,INDEX_REBUILD_ONLINE,INDEX_REBUILD_OFFLINE',
        @FragmentationHigh = 'INDEX_REBUILD_ONLINE,INDEX_REBUILD_OFFLINE',
        @FragmentationLevel1 = 10,
        @FragmentationLevel2 = 30,
        @UpdateStatistics = 'ALL',
        @OnlyModifiedStatistics = 'Y';
    Confidence: High (88%)
  2. Medium Monitor tempdb Usage as Workload Grows

    tempdb has 8 data files (tempdev + temp2–temp8) which matches a standard recommendation of number of files = min(logical CPUs, 8). This is already correctly configured. Monitor for tempdb version store growth if RCSI is enabled on OLTP databases.

    -- Monitor tempdb version store usage (DMV reports the current database,
    -- so run in tempdb context)
    USE tempdb;
    SELECT SUM(version_store_reserved_page_count) * 8.0 / 1024 AS version_store_MB,
           SUM(user_object_reserved_page_count) * 8.0 / 1024 AS user_objects_MB,
           SUM(internal_object_reserved_page_count) * 8.0 / 1024 AS internal_objects_MB
    FROM sys.dm_db_file_space_usage;
    ✓ tempdb file count (8) is appropriate for 16 logical CPUs.
    Confidence: High (87%)
  3. Low Enable SQL Server Audit for Sysadmin and DDL Activity

    As a developer instance with multiple databases, consider enabling SQL Server Audit to track schema changes, permission escalations, and login activity. This provides an audit trail for compliance and troubleshooting.

    -- Create server audit
    CREATE SERVER AUDIT [SecurityAudit]
    TO FILE (FILEPATH = N'C:\Logs\Audit\')
    WITH (ON_FAILURE = CONTINUE);
    
    ALTER SERVER AUDIT [SecurityAudit] WITH (STATE = ON);
    
    -- Create server audit specification for privileged activity
    CREATE SERVER AUDIT SPECIFICATION [PrivilegedActivity]
    FOR SERVER AUDIT [SecurityAudit]
    ADD (SERVER_ROLE_MEMBER_CHANGE_GROUP),
    ADD (SERVER_PERMISSION_CHANGE_GROUP),
    ADD (SUCCESSFUL_LOGIN_GROUP),
    ADD (FAILED_LOGIN_GROUP)
    WITH (STATE = ON);
    Confidence: Medium (72%)
  4. Low Re-Run Full Server Health Analysis After 7+ Days of Workload

    Many DMV-based metrics (missing indexes, wait statistics, plan cache patterns, I/O patterns) are most informative after sustained representative workload. With only 1 day of uptime, the current baseline is limited. Schedule a full re-analysis after at least 7 days to:

    • Identify missing index opportunities from accumulated DMV data
    • Review wait statistics for user-query bottlenecks
    • Assess plan cache efficiency post "optimize for ad hoc workloads" change
    • Validate backup and CHECKDB job execution success
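    To make the future wait-statistics review cleaner, the cumulative counters can optionally be reset now so the next 7 days form a fresh baseline:

    ```sql
    -- Reset cumulative wait statistics to start a fresh baseline
    DBCC SQLPERF ('sys.dm_os_wait_stats', CLEAR);
    -- Optionally also reset latch statistics
    DBCC SQLPERF ('sys.dm_os_latch_stats', CLEAR);
    ```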
    Confidence: High (90%)
  5. Low Configure Disk-Level Monitoring and Alerting

    All databases reside on C:\Data\ and C:\Logs\. Monitor disk free space and set alerts at 20% and 10% thresholds. Transaction log growth from un-backed-up FULL recovery databases (tpcc, tpch, etc.) can rapidly consume disk space.

    -- Check drive free space
    EXEC xp_fixeddrives;
    
    -- Or use WMI/PowerShell-based monitoring
    -- Set alerts when C:\ drops below 20% free capacity
    Confidence: High (88%)
Generated by AI SQL Tuner Studio using model claude-sonnet-4-6 with reasoning effort high | aisqltuner.com
Based on analysis as of 2026-04-13 20:58:48 UTC