πŸ“‹ Executive Summary

🚨 Critical: DBCC CHECKDB has never been run on any of the 7 user databases (DigitsSolver, WideWorldImporters, SQLStorm, tpcc, tpch, CRM, tpch10). The metadata shows a last-run date of January 1, 1900, indicating no integrity check has ever been recorded. This is the highest operational risk on this server.
🚨 Critical: Multiple databases have no backups whatsoever (DigitsSolver, tpcc, CRM, tpch10, master, model, msdb). WideWorldImporters' last backup was over 3 years ago (Oct 2022). This server has no credible recovery capability for most databases.
⚠️ Warning: Several databases are in the FULL recovery model but have no transaction log backups (DigitsSolver, tpcc, tpch, CRM, tpch10, model). Their log files will grow unbounded until log backups are taken or the databases are switched to SIMPLE recovery.
ℹ️ Info: Wait statistics are dominated by benign background waits (QDS persistence sleep, log consumer). I/O latencies are excellent across all files. Memory committed (526 MB) is extremely low relative to the 23,168 MB max server memory cap β€” the server has very light current workload. No deadlocks, no missing indexes, and no concurrency issues are detected.

This is a developer/test machine (Windows 10 Home, Developer Edition, Hypervisor) with a very low-activity workload. The dominant risks are operational: lack of integrity checks and backup coverage, not performance. Configuration improvements are incremental opportunities.

πŸ” Top Priorities at a Glance

01
Run DBCC CHECKDB Immediately
All 7 user databases have never had integrity verified. Data corruption would go undetected.
02
Establish Backup Strategy
Most databases have no backups. Several have full recovery with no log backups (log growth risk).
03
Fix Log Backup / Recovery Model Mismatch
DigitsSolver, tpcc, tpch, CRM, tpch10, model are FULL recovery with no log backups.
04
Enable Backup Compression
backup compression default = 0. Enable to reduce backup size and I/O on this storage.
05
Enable Optimize for Ad Hoc Workloads
Plan cache efficiency improvement β€” safe, zero-downtime change.
06
Standardize Query Store Settings
WideWorldImporters has non-standard QS config (compat level 130, different flush interval).

⚑ 1. Performance Improvements (CPU, Memory, I/O)

Low Wait Statistics: Benign Background Waits Confidence: High

The two dominant wait types consume nearly 100% of recorded wait time:

  • PWAIT_DIRECTLOGCONSUMER_GETNEXT (49.88%): Background log consumer thread waiting for new log records. This is normal internal SQL Server activity, not a user workload bottleneck.
  • QDS_PERSIST_TASK_MAIN_LOOP_SLEEP (49.68%): Query Store background flush thread sleeping between persistence cycles. Normal with Query Store enabled across all databases.

These waits indicate the server is essentially idle from a workload perspective. No performance action is needed for wait statistics.
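Should the workload picture change, the wait profile can be re-checked with a query along these lines (a sketch; the exclusion list below is illustrative, not exhaustive):

```sql
-- Top waits by total wait time, excluding the benign background waits
-- identified above plus a few other common idle/sleep wait types.
SELECT TOP (10)
    wait_type,
    wait_time_ms / 1000.0 AS wait_time_sec,
    waiting_tasks_count,
    CAST(100.0 * wait_time_ms / SUM(wait_time_ms) OVER () AS DECIMAL(5,2)) AS pct_of_total
FROM sys.dm_os_wait_stats
WHERE wait_type NOT IN (
    N'PWAIT_DIRECTLOGCONSUMER_GETNEXT',
    N'QDS_PERSIST_TASK_MAIN_LOOP_SLEEP',
    N'SLEEP_TASK', N'LAZYWRITER_SLEEP',
    N'SQLTRACE_INCREMENTAL_FLUSH_SLEEP',
    N'XE_TIMER_EVENT', N'BROKER_TASK_STOP')
ORDER BY wait_time_ms DESC;
```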

Low I/O Latency: Excellent β€” No Action Required Confidence: High

All database files show excellent I/O latency. Thresholds for concern are typically >20ms for data files and >10ms for log files:

Database / File          | Avg Read Latency | Avg Write Latency | Status
SQLStorm (ROWS)          | 0 ms             | 0 ms              | βœ“ Excellent
tpch (ROWS)              | 0 ms             | 0 ms              | βœ“ Excellent
tpcc (ROWS)              | 0 ms             | 0 ms              | βœ“ Excellent
WideWorldImporters (LOG) | 6 ms             | 0 ms              | βœ“ Good
tpcc (LOG)               | 4 ms             | 0 ms              | βœ“ Good
All other files          | ≀ 4 ms           | 0 ms              | βœ“ Excellent

Storage performance is healthy. No I/O tuning is required at this time.
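The per-file latencies above can be re-derived at any time from the virtual file stats DMV (a sketch; note these counters are cumulative since instance start):

```sql
-- Average read/write latency per database file since instance startup.
SELECT
    DB_NAME(vfs.database_id) AS database_name,
    mf.type_desc,
    CASE WHEN vfs.num_of_reads  = 0 THEN 0
         ELSE vfs.io_stall_read_ms  / vfs.num_of_reads  END AS avg_read_ms,
    CASE WHEN vfs.num_of_writes = 0 THEN 0
         ELSE vfs.io_stall_write_ms / vfs.num_of_writes END AS avg_write_ms
FROM sys.dm_io_virtual_file_stats(NULL, NULL) AS vfs
JOIN sys.master_files AS mf
  ON vfs.database_id = mf.database_id
 AND vfs.file_id     = mf.file_id
ORDER BY avg_read_ms DESC;
```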

Low Memory: Very Low Current Utilization β€” Review Max Server Memory Confidence: Medium

Committed memory is only 526 MB against a max server memory cap of 23,168 MB. This is expected for a developer machine under light/no workload. The max server memory setting of 23,168 MB leaves approximately 9,359 MB for the OS on a 32,527 MB machine, which is a reasonable split.

  • No memory pressure is detected; Buffer Pool Lazy Writer is not under stress.
  • If this machine is used interactively (Windows 10 Home desktop), consider monitoring OS memory usage under concurrent workloads and adjusting the cap down if needed.
  • Recommended: Max 23,168 MB is acceptable. Min server memory = 0 (effective running value 16 MB) is fine for a dev/test instance.
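The committed-memory figure and the configured caps can be cross-checked directly (a sketch using standard catalog views):

```sql
-- Memory actually in use by the SQL Server process vs. configured limits.
SELECT
    (SELECT physical_memory_in_use_kb / 1024
       FROM sys.dm_os_process_memory)          AS memory_in_use_mb,
    (SELECT CAST(value_in_use AS INT)
       FROM sys.configurations
      WHERE name = 'max server memory (MB)')   AS max_server_memory_mb,
    (SELECT CAST(value_in_use AS INT)
       FROM sys.configurations
      WHERE name = 'min server memory (MB)')   AS min_server_memory_mb;
```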

βš™οΈ 2. Configuration Optimizations

Medium Enable "Optimize for Ad Hoc Workloads" Confidence: High

Currently disabled (value = 0). This feature stores only a plan stub on first execution of an ad hoc query, saving plan cache memory for single-use query plans. Safe zero-downtime change with immediate effect.

-- Enable optimize for ad hoc workloads
EXEC sp_configure 'show advanced options', 1;
RECONFIGURE;

EXEC sp_configure 'optimize for ad hoc workloads', 1;
RECONFIGURE;

EXEC sp_configure 'show advanced options', 0;
RECONFIGURE;
Medium Enable Backup Compression Default Confidence: High

Currently disabled (value = 0). Backup compression reduces backup size by 30–80% for typical workloads and reduces backup I/O duration. This is a server-level default that can be overridden per-backup statement.

-- Enable backup compression default
EXEC sp_configure 'show advanced options', 1;
RECONFIGURE;

EXEC sp_configure 'backup compression default', 1;
RECONFIGURE;

EXEC sp_configure 'show advanced options', 0;
RECONFIGURE;
Medium Enable Backup Checksum Default Confidence: High

backup checksum default = 0. Enabling checksums on backups allows detection of page corruption during backup and restore operations, improving data integrity assurance.

-- Enable backup checksum default
EXEC sp_configure 'show advanced options', 1;
RECONFIGURE;

EXEC sp_configure 'backup checksum default', 1;
RECONFIGURE;

EXEC sp_configure 'show advanced options', 0;
RECONFIGURE;
Low Enable Blocked Process Threshold Confidence: Medium

Currently blocked process threshold = 0 (disabled). Setting this to 5–15 seconds enables the Blocked Process Report event in Extended Events, helping identify blocking chains during future workload testing.

-- Enable blocked process threshold (5 seconds)
EXEC sp_configure 'show advanced options', 1;
RECONFIGURE;

EXEC sp_configure 'blocked process threshold (s)', 5;
RECONFIGURE;

EXEC sp_configure 'show advanced options', 0;
RECONFIGURE;
Low Verify Cost Threshold for Parallelism (Currently 30) Confidence: Medium

Current value is 30, which is a good baseline (the SQL Server default is 5, widely considered too low). With a 16-logical-core machine running Developer Edition (which carries the Enterprise feature set), the current setting of 30 is reasonable. Monitor for excessive parallelism on the TPC workloads; values of 40–50 may be worth testing.

  • MAXDOP = 8 is set; for a single physical processor with 16 logical cores, MAXDOP 8 is reasonable per SQL Server guidance (≀ half the logical CPUs per NUMA node).
  • No changes required at this time.
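Both parallelism settings can be verified together (a quick read-only check):

```sql
-- Confirm current running values for the two parallelism settings.
SELECT name, CAST(value_in_use AS INT) AS value_in_use
FROM sys.configurations
WHERE name IN ('max degree of parallelism',
               'cost threshold for parallelism');
```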
Low Enable Memory-Optimized tempdb Metadata Confidence: Medium

tempdb metadata memory-optimized = 0. SQL Server 2019+ supports memory-optimized tempdb metadata, which can reduce contention on system tables (sysschobjs, sysobjvalues) under heavy tempdb workloads. The current tempdb workload is minimal (8 data files, very low I/O), so this is low priority but worth enabling proactively for future workload tests.

-- Enable memory-optimized tempdb metadata (requires restart)
EXEC sp_configure 'show advanced options', 1;
RECONFIGURE;

EXEC sp_configure 'tempdb metadata memory-optimized', 1;
RECONFIGURE;

EXEC sp_configure 'show advanced options', 0;
RECONFIGURE;
-- NOTE: SQL Server restart required for this change to take effect.

πŸ—‚οΈ 3. Index Optimization Opportunities

βœ“ No High-Impact Missing Indexes Detected. The sys.dm_db_missing_index_details DMV reports no high-value missing indexes across all databases. Index coverage appears adequate for the current observed workload.
Low Proactive: Review Index Fragmentation on Active Databases Confidence: Medium

Since CHECKDB has never been run and no maintenance plan is in place, index fragmentation is also unknown. After establishing a backup and integrity baseline, schedule index maintenance on active databases: SQLStorm (highest I/O), WideWorldImporters, tpcc, tpch, and tpch10.

-- Check index fragmentation for a specific database (example: SQLStorm)
USE SQLStorm;
SELECT
    OBJECT_NAME(ips.object_id)        AS table_name,
    i.name                             AS index_name,
    ips.index_type_desc,
    CAST(ips.avg_fragmentation_in_percent AS DECIMAL(5,1)) AS fragmentation_pct,
    ips.page_count
FROM sys.dm_db_index_physical_stats(
        DB_ID(), NULL, NULL, NULL, 'LIMITED') ips
INNER JOIN sys.indexes i
    ON ips.object_id = i.object_id
    AND ips.index_id  = i.index_id
WHERE ips.avg_fragmentation_in_percent > 10
  AND ips.page_count > 100
ORDER BY ips.avg_fragmentation_in_percent DESC;

General guidelines:

  • 10–30% fragmentation: REORGANIZE the index.
  • >30% fragmentation: REBUILD the index (use ONLINE = ON where possible β€” available in Enterprise/Developer Edition).
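Applied to a concrete index, the two remediation paths look like this (the index and table names below are hypothetical, for illustration only):

```sql
-- 10-30% fragmentation: reorganize (online, lightweight).
ALTER INDEX IX_Orders_OrderDate ON dbo.Orders REORGANIZE;

-- >30% fragmentation: rebuild; ONLINE = ON is available in
-- Enterprise/Developer Edition.
ALTER INDEX IX_Orders_OrderDate ON dbo.Orders
    REBUILD WITH (ONLINE = ON, SORT_IN_TEMPDB = ON);
```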

πŸ“Š 4. Query Store Optimization

Medium Standardize WideWorldImporters Query Store Configuration Confidence: High

WideWorldImporters has non-standard Query Store settings compared to all other databases, and also has a lower compatibility level (130 = SQL Server 2016) which limits access to SQL Server 2022 IQP features:

Setting                   | WideWorldImporters | All Other DBs
Compatibility Level       | 130                | 160
QS Max Storage            | 500 MB             | 1,000 MB
Flush Interval            | 50 minutes         | 15 minutes
Stats Collection Interval | 15 minutes         | 60 minutes
Max Plans Per Query       | 1,000              | 200

Upgrade compatibility level to 160 to enable Parameter Sensitive Plan optimization, DOP Feedback, and other SQL 2022 IQP enhancements. Standardize QS settings for consistent management.

-- Upgrade WideWorldImporters compatibility level
ALTER DATABASE WideWorldImporters
    SET COMPATIBILITY_LEVEL = 160;

-- Standardize Query Store settings
ALTER DATABASE WideWorldImporters
SET QUERY_STORE = ON (
    OPERATION_MODE          = READ_WRITE,
    MAX_STORAGE_SIZE_MB     = 1000,
    INTERVAL_LENGTH_MINUTES = 60,
    DATA_FLUSH_INTERVAL_SECONDS = 900,  -- 15 minutes
    STALE_QUERY_THRESHOLD_DAYS  = 30,
    SIZE_BASED_CLEANUP_MODE = AUTO,
    QUERY_CAPTURE_MODE      = AUTO,
    MAX_PLANS_PER_QUERY     = 200
);
⚠️ Caution: Changing compatibility level from 130 to 160 may alter query plan choices. Test workloads against WideWorldImporters after the change. Use Query Store to monitor plan regressions and force prior plans if needed.
Low Query Store Settings for Remaining Databases Are Well-Configured Confidence: High

All other databases (CRM, DigitsSolver, SQLStorm, tpcc, tpch, tpch10) have consistent, sensible Query Store settings:

  • State: READ_WRITE βœ“
  • Compatibility Level: 160 βœ“
  • Storage utilization: all under 2% βœ“
  • Cleanup Mode: AUTO βœ“
  • Capture Mode: AUTO βœ“

No changes required for these databases' Query Store configuration.
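These settings can be audited per database with the Query Store options catalog view (run in the context of each database):

```sql
-- Current Query Store configuration for the database in context.
SELECT
    DB_NAME()              AS database_name,
    actual_state_desc,
    max_storage_size_mb,
    current_storage_size_mb,
    flush_interval_seconds,
    interval_length_minutes,
    max_plans_per_query,
    query_capture_mode_desc,
    size_based_cleanup_mode_desc
FROM sys.database_query_store_options;
```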

πŸ” 5. Database Integrity Maintenance (CHECKDB)

Critical DBCC CHECKDB Has Never Run on Any User Database Confidence: High

All 7 user databases show a last CHECKDB date of January 1, 1900, indicating integrity checks have never been recorded. This is a critical operational risk β€” undetected corruption could lead to unrecoverable data loss, especially combined with the lack of valid backups.

The Jan 1, 1900 date is the SQL Server sentinel value indicating CHECKDB has never been run (or the database was restored/attached without running CHECKDB). This is not a "long time ago" β€” it means never.

Affected Databases

Database           | Last CHECKDB | Risk Level
DigitsSolver       | Never        | Critical
WideWorldImporters | Never        | Critical
SQLStorm           | Never        | Critical
tpcc               | Never        | Critical
tpch               | Never        | Critical
CRM                | Never        | Critical
tpch10             | Never        | Critical

Immediate Action: Run CHECKDB on All Databases

-- Run CHECKDB with DATA_PURITY to also check column-value integrity.
-- Run during a low-activity period; CHECKDB reads from an internal
-- database snapshot, so blocking is minimal. WITH NO_INFOMSGS reduces
-- output noise; ALL_ERRORMSGS reports every error found.

DBCC CHECKDB (DigitsSolver)       WITH NO_INFOMSGS, ALL_ERRORMSGS, DATA_PURITY;
DBCC CHECKDB (WideWorldImporters) WITH NO_INFOMSGS, ALL_ERRORMSGS, DATA_PURITY;
DBCC CHECKDB (SQLStorm)           WITH NO_INFOMSGS, ALL_ERRORMSGS, DATA_PURITY;
DBCC CHECKDB (tpcc)               WITH NO_INFOMSGS, ALL_ERRORMSGS, DATA_PURITY;
DBCC CHECKDB (tpch)               WITH NO_INFOMSGS, ALL_ERRORMSGS, DATA_PURITY;
DBCC CHECKDB (CRM)                WITH NO_INFOMSGS, ALL_ERRORMSGS, DATA_PURITY;
DBCC CHECKDB (tpch10)             WITH NO_INFOMSGS, ALL_ERRORMSGS, DATA_PURITY;

Recommended Ongoing Schedule

Use SQL Server Agent to schedule weekly CHECKDB runs. For larger databases, consider CHECKFILEGROUP or CHECKTABLE approaches to distribute the workload.

-- SQL Server Agent Job Step example for weekly CHECKDB
-- (Create via SSMS or T-SQL; shown here as T-SQL logic)

-- Simple scheduled job step:
EXEC msdb.dbo.sp_add_job
    @job_name = N'Weekly_CHECKDB_AllUserDBs';

EXEC msdb.dbo.sp_add_jobstep
    @job_name = N'Weekly_CHECKDB_AllUserDBs',
    @step_name = N'Run CHECKDB',
    @subsystem = N'TSQL',
    @command = N'
        DECLARE @db_name NVARCHAR(128), @sql NVARCHAR(MAX);
        DECLARE db_cursor CURSOR FOR
            SELECT name FROM sys.databases
            WHERE database_id > 4
              AND state_desc = ''ONLINE''
              AND is_read_only = 0;

        OPEN db_cursor;
        FETCH NEXT FROM db_cursor INTO @db_name;
        WHILE @@FETCH_STATUS = 0
        BEGIN
            -- Build the command in a variable first: EXEC() accepts only
            -- literals and variables, not function calls such as QUOTENAME().
            SET @sql = N''DBCC CHECKDB ('' + QUOTENAME(@db_name)
                     + N'') WITH NO_INFOMSGS, ALL_ERRORMSGS, DATA_PURITY'';
            EXEC (@sql);
            FETCH NEXT FROM db_cursor INTO @db_name;
        END
        CLOSE db_cursor;
        DEALLOCATE db_cursor;
    ';

EXEC msdb.dbo.sp_add_schedule
    @schedule_name = N'Weekly_Sunday_2AM',
    @freq_type = 8,
    @freq_interval = 1,   -- Sunday
    @active_start_time = 020000;

EXEC msdb.dbo.sp_attach_schedule
    @job_name = N'Weekly_CHECKDB_AllUserDBs',
    @schedule_name = N'Weekly_Sunday_2AM';

EXEC msdb.dbo.sp_add_jobserver
    @job_name = N'Weekly_CHECKDB_AllUserDBs';

πŸ”’ 6. Security Enhancements

Low Unencrypted Connection: SQLTelemetry via Shared Memory β€” Acceptable Risk Confidence: High

One unencrypted connection was detected: NT SERVICE\SQLTELEMETRY via Shared Memory transport (local machine only). Per analysis guidelines:

  • This is a shared memory transport β€” data never leaves the local machine kernel and cannot be intercepted over a network.
  • The login is NT SERVICE\SQLTELEMETRY (SQL Server CEIP telemetry service) β€” no user data is exposed.
  • Risk assessment: Minimal / Acceptable. No action required for this specific connection.
Both detected sessions use Shared Memory (Port: 0), which is local-only. Neither session represents a network-exposed unencrypted connection. The "50% unencrypted" statistic is misleading β€” there are only 2 sessions, both local.
Low Sysadmin Membership: Single Member β€” Good Practice Confidence: High

Only one sysadmin member: RockyPC\k_a_f. This is a Windows-authenticated local user account, which is appropriate for a developer workstation. The number is minimal and follows least-privilege principles for a single-user dev machine.

  • No excess sysadmin accounts detected.
  • SA account status and SQL logins not detailed in the data β€” verify SA account is disabled if not needed.
-- Verify SA account is disabled (best practice if not using SQL auth)
SELECT name, is_disabled
FROM sys.sql_logins
WHERE name = 'sa';

-- Disable SA if not needed
-- ALTER LOGIN sa DISABLE;
Low Mixed Authentication Mode Enabled Confidence: Medium

Integrated Security Only = No β€” SQL Server is running in Mixed Authentication Mode (both SQL and Windows logins accepted). For a developer machine this is commonly needed, but consider:

  • Ensure all SQL logins have strong passwords.
  • Review and disable any unused SQL logins.
  • If only Windows auth is needed, switch to Windows Authentication Mode only.
-- Audit SQL logins (non-Windows)
SELECT name, is_disabled, create_date, modify_date,
       LOGINPROPERTY(name, 'PasswordLastSetTime') AS pwd_last_set
FROM sys.sql_logins
ORDER BY name;
Medium Consider Enabling Force Encryption for Future Network Connections Confidence: Medium

Currently both active connections are local (shared memory). If this server is ever accessed remotely or promoted to a shared dev/test environment, enforce TLS encryption:

  • Install a server certificate (self-signed is acceptable for dev/test).
  • Enable Force Encryption = Yes in SQL Server Configuration Manager β†’ SQL Server Network Configuration β†’ Protocols β†’ Properties.
  • Update all client connection strings with Encrypt=True; TrustServerCertificate=True for dev/test, or install a trusted CA cert for production-like environments.
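Whether connections are actually encrypted can be verified at any time from the connections DMV (a read-only check; currently it would show only the two local shared-memory sessions):

```sql
-- Per-connection transport and encryption status.
SELECT
    c.session_id,
    c.net_transport,    -- e.g. Shared memory, TCP
    c.encrypt_option,   -- TRUE when the connection is TLS-encrypted
    s.login_name
FROM sys.dm_exec_connections AS c
JOIN sys.dm_exec_sessions    AS s
  ON c.session_id = s.session_id;
```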

πŸ”„ 7. Concurrency Issues (Deadlocks, Locking Patterns)

βœ“ No Deadlocks Detected in the past 7 days. No deadlock analysis or remediation is required at this time.
βœ“ No Significant Blocking Patterns. With only 2 active sessions (both local/background), there is no observable locking or blocking to address.
Low Proactive: Set Up Extended Events for Deadlock Monitoring Confidence: High

As workload testing increases (SQLStorm, tpcc, tpch workloads), proactively configure deadlock monitoring via Extended Events to capture future events:

-- Create a lightweight deadlock capture session
CREATE EVENT SESSION [Deadlock_Monitor] ON SERVER
ADD EVENT sqlserver.xml_deadlock_report(
    ACTION (sqlserver.database_name,
            sqlserver.sql_text,
            sqlserver.username)
)
ADD TARGET package0.ring_buffer(SET max_memory = 4096)
WITH (MAX_DISPATCH_LATENCY = 5 SECONDS);

ALTER EVENT SESSION [Deadlock_Monitor] ON SERVER STATE = START;

πŸ› οΈ 8. Operational Best Practices

Critical Establish Comprehensive Backup Strategy Immediately Confidence: High

The backup situation across this server is severely deficient. No recovery is possible for most databases in the event of data loss or corruption:

Database           | Recovery Model | Last Full Backup          | Issue
master             | SIMPLE         | Never                     | No backup
model              | FULL           | Never                     | No backup, no log backups
msdb               | SIMPLE         | Never                     | No backup
DigitsSolver       | FULL           | Never                     | No backup, no log backups
WideWorldImporters | SIMPLE         | Oct 2022 (1,283 days ago) | Severely outdated
SQLStorm           | SIMPLE         | Oct 2025 (169 days ago)   | No recent backup
tpcc               | FULL           | Never                     | No backup, no log backups
tpch               | FULL           | Nov 2025 (141 days ago)   | No log backups
CRM                | FULL           | Never                     | No backup, no log backups
tpch10             | FULL           | Never                     | No backup, no log backups

Immediate: Take Full Backups of All Databases

-- Take immediate full backups (with compression enabled after sp_configure change above)
BACKUP DATABASE [master]
    TO DISK = N'C:\Backups\master_full.bak'
    WITH COMPRESSION, CHECKSUM, STATS = 10;

BACKUP DATABASE [model]
    TO DISK = N'C:\Backups\model_full.bak'
    WITH COMPRESSION, CHECKSUM, STATS = 10;

BACKUP DATABASE [msdb]
    TO DISK = N'C:\Backups\msdb_full.bak'
    WITH COMPRESSION, CHECKSUM, STATS = 10;

BACKUP DATABASE [DigitsSolver]
    TO DISK = N'C:\Backups\DigitsSolver_full.bak'
    WITH COMPRESSION, CHECKSUM, STATS = 10;

BACKUP DATABASE [WideWorldImporters]
    TO DISK = N'C:\Backups\WideWorldImporters_full.bak'
    WITH COMPRESSION, CHECKSUM, STATS = 10;

BACKUP DATABASE [SQLStorm]
    TO DISK = N'C:\Backups\SQLStorm_full.bak'
    WITH COMPRESSION, CHECKSUM, STATS = 10;

BACKUP DATABASE [tpcc]
    TO DISK = N'C:\Backups\tpcc_full.bak'
    WITH COMPRESSION, CHECKSUM, STATS = 10;

BACKUP DATABASE [tpch]
    TO DISK = N'C:\Backups\tpch_full.bak'
    WITH COMPRESSION, CHECKSUM, STATS = 10;

BACKUP DATABASE [CRM]
    TO DISK = N'C:\Backups\CRM_full.bak'
    WITH COMPRESSION, CHECKSUM, STATS = 10;

BACKUP DATABASE [tpch10]
    TO DISK = N'C:\Backups\tpch10_full.bak'
    WITH COMPRESSION, CHECKSUM, STATS = 10;

Address Recovery Model / Log Backup Mismatch

Databases in FULL recovery model without log backups have log files that grow continuously and cannot be truncated. Either implement log backups or switch to SIMPLE recovery (if point-in-time recovery is not needed):

-- Option A: Switch to SIMPLE recovery for dev/test databases
-- (choose this if point-in-time recovery is not required)
ALTER DATABASE DigitsSolver SET RECOVERY SIMPLE;
ALTER DATABASE tpcc         SET RECOVERY SIMPLE;
ALTER DATABASE CRM          SET RECOVERY SIMPLE;
ALTER DATABASE tpch10       SET RECOVERY SIMPLE;
-- Note: take a full backup first so a recovery baseline exists.

-- Option B: Keep FULL recovery and add log backups
-- (choose this if you need point-in-time recovery)
BACKUP LOG [DigitsSolver]
    TO DISK = N'C:\Backups\DigitsSolver_log.bak'
    WITH COMPRESSION, CHECKSUM;
-- Then schedule regular log backups (every 15-60 minutes)
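To confirm which databases are currently blocked from truncating their log pending a log backup, the log reuse wait column is a quick check (LOG_BACKUP indicates the log is waiting on a log backup):

```sql
-- FULL-recovery databases and the reason their log cannot be truncated.
SELECT name, recovery_model_desc, log_reuse_wait_desc
FROM sys.databases
WHERE recovery_model_desc = 'FULL';
```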
Medium Apply Latest SQL Server 2022 Cumulative Update Confidence: High

Running SQL Server 2022 RTM-GDR (KB5077465, 16.0.1170.5). This is a GDR (General Distribution Release / security-only patching branch). As of the analysis date, SQL Server 2022 Cumulative Updates are available through CU16+. The current CU branch includes stability, performance, and security fixes beyond the GDR track.

  • Visit Microsoft SQL Server 2022 Build Versions to identify the latest CU.
  • Test on dev machine (this server) before applying to production equivalents.
  • For a developer workstation, updating to the latest CU is lower risk and recommended.
Low System Databases: master and mastlog on C:\Data (Non-Standard Path) Confidence: Low

System database files (master.mdf, mastlog.ldf, MSDBData.mdf, MSDBLog.ldf, tempdb, model) are located at C:\Data\MSSQL16.MSSQLSERVER\MSSQL\DATA\ which is a non-default path (default would be the SQL Server install directory). This is acceptable if intentionally configured, but ensure this path is included in backup scope and has appropriate access permissions.
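All file locations, system and user, can be enumerated for the backup-scope review (a read-only check):

```sql
-- Physical paths of every database file on the instance.
SELECT
    DB_NAME(database_id) AS database_name,
    type_desc,           -- ROWS or LOG
    physical_name
FROM sys.master_files
ORDER BY database_id, file_id;
```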

πŸ“… 9. Proactive Maintenance Suggestions

Medium Implement SQL Server Agent Maintenance Plan Confidence: High

No evidence of any SQL Server Agent maintenance jobs. Implement a comprehensive maintenance schedule:

Task                            | Frequency                       | Notes
DBCC CHECKDB (all user DBs)     | Weekly (Sunday 2:00 AM)         | Critical β€” currently never run
Full Database Backup (all DBs)  | Weekly (Saturday 11:00 PM)      | Include compression + checksum
Differential Backup (key DBs)   | Daily (11:00 PM)                | For SQLStorm, CRM, DigitsSolver
Log Backup (FULL recovery DBs)  | Every 30–60 minutes             | If keeping FULL recovery model
Index Reorganize (>10% frag)    | Weekly                          | After CHECKDB baseline established
Index Rebuild (>30% frag)       | Weekly                          | ONLINE=ON available in Developer Ed
Update Statistics               | Weekly (or after index rebuild) | FULLSCAN on smaller tables
Clean up backup history (msdb)  | Monthly                         | sp_delete_backuphistory
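The msdb history cleanup task can be sketched as follows (the 90-day retention window here is an assumption; adjust to the retention policy chosen):

```sql
-- Purge backup/restore history older than 90 days from msdb.
-- (90 days is an illustrative retention window, not a recommendation.)
DECLARE @oldest DATETIME = DATEADD(DAY, -90, GETDATE());
EXEC msdb.dbo.sp_delete_backuphistory @oldest_date = @oldest;
```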
Low Monitor tempdb Configuration β€” Currently Adequate Confidence: High

tempdb has 8 data files (temp2–temp8 + tempdev), which aligns with the recommended practice of one data file per logical CPU, capped at 8 files for machines with more than 8 CPUs. Current tempdb I/O is extremely low. No changes required. If heavy tempdb workloads develop from TPC benchmarks, consider enabling memory-optimized tempdb metadata (Section 2).

Low Leverage SQL Server 2022 Intelligent Query Processing Features Confidence: Medium

With all databases at compatibility level 160 (except WideWorldImporters), the following SQL Server 2022 IQP features are available that can benefit TPC workloads and analytical queries:

  • Parameter Sensitive Plan (PSP) Optimization: Automatically creates multiple plans for queries with parameter skew (beneficial for tpcc/tpch workloads).
  • Degree of Parallelism (DOP) Feedback: Automatically adjusts parallelism for individual queries based on observed performance (beneficial on the 16-core machine).
  • Memory Grant Feedback (persistent, via Query Store): Persists memory grant adjustments across restarts.
  • Cardinality Estimation Feedback: Corrects CE model errors over time.

These features are automatically active at compatibility level 160 with Query Store enabled. No additional configuration is needed β€” they will engage as workloads run and Query Store accumulates data.

Low Consider Enabling Database Mail for Alert Notifications Confidence: Low

Database Mail XPs = 0. Enabling Database Mail and configuring SQL Server Agent alerts for critical conditions (severity 16+, severity 17+, specific error numbers) provides proactive notification of issues:

  • Backup failure alerts
  • CHECKDB failure / corruption detected
  • Disk space warnings
  • SQL Server error log severity 17–25 events

This is optional for a developer workstation but useful if the machine runs unattended for extended periods.
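Enabling the Database Mail extended stored procedures follows the same sp_configure pattern used elsewhere in this report (mail profiles and Agent operators must still be configured separately afterward):

```sql
-- Enable Database Mail XPs (prerequisite for Database Mail).
EXEC sp_configure 'show advanced options', 1;
RECONFIGURE;

EXEC sp_configure 'Database Mail XPs', 1;
RECONFIGURE;

EXEC sp_configure 'show advanced options', 0;
RECONFIGURE;
```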
