AI SQL Tuner Recommendations
Executive Summary
This is a developer/test machine (Windows 10 Home, Developer Edition, Hypervisor) with a very low-activity workload. The dominant risks are operational: lack of integrity checks and backup coverage, not performance. Configuration improvements are incremental opportunities.
Top Priorities at a Glance
1. Performance Improvements (CPU, Memory, I/O)
The two dominant wait types consume nearly 100% of recorded wait time:
- PWAIT_DIRECTLOGCONSUMER_GETNEXT (49.88%): Background log consumer thread waiting for new log records. This is normal internal SQL Server activity, not a user workload bottleneck.
- QDS_PERSIST_TASK_MAIN_LOOP_SLEEP (49.68%): Query Store background flush thread sleeping between persistence cycles. Normal with Query Store enabled across all databases.
These waits indicate the server is essentially idle from a workload perspective. No performance action is needed for wait statistics.
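If workload waits need to be re-checked later, a quick pass over sys.dm_os_wait_stats with benign background waits filtered out (the exclusion list below is illustrative, not exhaustive) separates real bottlenecks from idle-time noise:

```sql
-- Top waits, excluding common benign/background wait types.
-- Percentages are relative to the non-excluded waits.
SELECT TOP (10)
    wait_type,
    wait_time_ms / 1000.0 AS wait_time_s,
    waiting_tasks_count,
    CAST(100.0 * wait_time_ms / SUM(wait_time_ms) OVER () AS DECIMAL(5,2)) AS pct
FROM sys.dm_os_wait_stats
WHERE wait_type NOT IN (
    N'PWAIT_DIRECTLOGCONSUMER_GETNEXT',
    N'QDS_PERSIST_TASK_MAIN_LOOP_SLEEP',
    N'SLEEP_TASK', N'LAZYWRITER_SLEEP',
    N'SQLTRACE_INCREMENTAL_FLUSH_SLEEP',
    N'CHECKPOINT_QUEUE', N'BROKER_TASK_STOP')
ORDER BY wait_time_ms DESC;
```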
All database files show excellent I/O latency. Thresholds for concern are typically >20ms for data files and >10ms for log files:
| Database / File | Avg Read Latency | Avg Write Latency | Status |
|---|---|---|---|
| SQLStorm (ROWS) | 0 ms | 0 ms | ✓ Excellent |
| tpch (ROWS) | 0 ms | 0 ms | ✓ Excellent |
| tpcc (ROWS) | 0 ms | 0 ms | ✓ Excellent |
| WideWorldImporters (LOG) | 6 ms | 0 ms | ✓ Good |
| tpcc (LOG) | 4 ms | 0 ms | ✓ Good |
| All other files | ≤ 4 ms | 0 ms | ✓ Excellent |
Storage performance is healthy. No I/O tuning is required at this time.
Committed memory is only 526 MB against a max server memory cap of 23,168 MB. This is expected for a developer machine under light/no workload. The max server memory setting of 23,168 MB leaves approximately 9,359 MB for the OS on a 32,527 MB machine, which is a reasonable split.
- No memory pressure is detected; Buffer Pool Lazy Writer is not under stress.
- If this machine is used interactively (Windows 10 Home desktop), consider monitoring OS memory usage under concurrent workloads and adjusting the cap down if needed.
- Recommended: Max 23,168 MB is acceptable. Min server memory = 0 (effective running value 16 MB) is fine for a dev/test instance.
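Memory health can be spot-checked at any time from sys.dm_os_process_memory, which reports the process's committed memory and the OS low-memory signals directly:

```sql
-- Committed memory and low-memory notifications for the SQL Server process
SELECT physical_memory_in_use_kb / 1024 AS committed_mb,
       memory_utilization_percentage,
       process_physical_memory_low,   -- 1 = OS signaled physical memory pressure
       process_virtual_memory_low     -- 1 = virtual address space pressure
FROM sys.dm_os_process_memory;
```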
2. Configuration Optimizations
optimize for ad hoc workloads is currently disabled (value = 0). This feature stores only a plan stub on first execution of an ad hoc query, saving plan cache memory for single-use query plans. Safe zero-downtime change with immediate effect.
-- Enable optimize for ad hoc workloads
EXEC sp_configure 'show advanced options', 1;
RECONFIGURE;
EXEC sp_configure 'optimize for ad hoc workloads', 1;
RECONFIGURE;
EXEC sp_configure 'show advanced options', 0;
RECONFIGURE;
backup compression default is currently disabled (value = 0). Backup compression reduces backup size by 30–80% for typical workloads and reduces backup I/O duration. This is a server-level default that can be overridden per-backup statement.
-- Enable backup compression default
EXEC sp_configure 'show advanced options', 1;
RECONFIGURE;
EXEC sp_configure 'backup compression default', 1;
RECONFIGURE;
EXEC sp_configure 'show advanced options', 0;
RECONFIGURE;
backup checksum default is currently 0 (disabled). Enabling checksums on backups allows detection of page corruption during backup and restore operations, improving data integrity assurance.
-- Enable backup checksum default
EXEC sp_configure 'show advanced options', 1;
RECONFIGURE;
EXEC sp_configure 'backup checksum default', 1;
RECONFIGURE;
EXEC sp_configure 'show advanced options', 0;
RECONFIGURE;
The blocked process threshold is currently 0 (disabled). Setting it to 5–15 seconds enables the Blocked Process Report event in Extended Events, helping identify blocking chains during future workload testing.
-- Enable blocked process threshold (5 seconds)
EXEC sp_configure 'show advanced options', 1;
RECONFIGURE;
EXEC sp_configure 'blocked process threshold (s)', 5;
RECONFIGURE;
EXEC sp_configure 'show advanced options', 0;
RECONFIGURE;
The cost threshold for parallelism is currently 30, which is a good baseline (the SQL Server default of 5 is widely considered too low). With 16 logical cores and Developer Edition's Enterprise-equivalent feature set, 30 is reasonable. Monitor for excessive parallelism on the TPC workloads; values of 40–50 may be worth testing.
- MAXDOP = 8 is set; for a single physical processor with 16 logical cores, MAXDOP 8 is reasonable per SQL Server guidance (β€ half the logical CPUs per NUMA node).
- No changes required at this time.
tempdb metadata memory-optimized = 0. SQL Server 2019+ supports memory-optimized tempdb metadata, which can reduce contention on system tables (sysschobjs, sysobjvalues) under heavy tempdb workloads. The current tempdb workload is minimal (8 data files, very low I/O), so this is low priority but worth enabling proactively for future workload tests.
-- Enable memory-optimized tempdb metadata (requires restart).
-- This option is set via ALTER SERVER CONFIGURATION rather than sp_configure.
ALTER SERVER CONFIGURATION SET MEMORY_OPTIMIZED TEMPDB_METADATA = ON;
-- NOTE: SQL Server restart required for this change to take effect.
3. Index Optimization Opportunities
Since CHECKDB has never been run and no maintenance plan is in place, index fragmentation is also unknown. After establishing a backup and integrity baseline, schedule index maintenance on active databases: SQLStorm (highest I/O), WideWorldImporters, tpcc, tpch, and tpch10.
-- Check index fragmentation for a specific database (example: SQLStorm)
USE SQLStorm;
SELECT
OBJECT_NAME(ips.object_id) AS table_name,
i.name AS index_name,
ips.index_type_desc,
CAST(ips.avg_fragmentation_in_percent AS DECIMAL(5,1)) AS fragmentation_pct,
ips.page_count
FROM sys.dm_db_index_physical_stats(
DB_ID(), NULL, NULL, NULL, 'LIMITED') ips
INNER JOIN sys.indexes i
ON ips.object_id = i.object_id
AND ips.index_id = i.index_id
WHERE ips.avg_fragmentation_in_percent > 10
AND ips.page_count > 100
ORDER BY ips.avg_fragmentation_in_percent DESC;
General guidelines:
- 10β30% fragmentation: REORGANIZE the index.
- >30% fragmentation: REBUILD the index (use ONLINE = ON where possible β available in Enterprise/Developer Edition).
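The guideline thresholds above translate into maintenance statements like the following (IX_Example and dbo.ExampleTable are placeholder names to substitute from the fragmentation query's output):

```sql
-- 10-30% fragmentation: reorganize (always online; can be
-- interrupted without losing completed work)
ALTER INDEX IX_Example ON dbo.ExampleTable REORGANIZE;

-- >30% fragmentation: rebuild online (Developer/Enterprise Edition feature)
ALTER INDEX IX_Example ON dbo.ExampleTable
    REBUILD WITH (ONLINE = ON, SORT_IN_TEMPDB = ON);
```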
4. Query Store Optimization
WideWorldImporters has non-standard Query Store settings compared to all other databases, and also has a lower compatibility level (130 = SQL Server 2016) which limits access to SQL Server 2022 IQP features:
| Setting | WideWorldImporters | All Other DBs |
|---|---|---|
| Compatibility Level | 130 | 160 |
| QS Max Storage | 500 MB | 1,000 MB |
| Flush Interval | 50 minutes | 15 minutes |
| Stats Collection Interval | 15 minutes | 60 minutes |
| Max Plans Per Query | 1,000 | 200 |
Upgrade compatibility level to 160 to enable Parameter Sensitive Plan optimization, DOP Feedback, and other SQL 2022 IQP enhancements. Standardize QS settings for consistent management.
-- Upgrade WideWorldImporters compatibility level
ALTER DATABASE WideWorldImporters
SET COMPATIBILITY_LEVEL = 160;
-- Standardize Query Store settings
ALTER DATABASE WideWorldImporters
SET QUERY_STORE = ON (
OPERATION_MODE = READ_WRITE,
MAX_STORAGE_SIZE_MB = 1000,
INTERVAL_LENGTH_MINUTES = 60,
DATA_FLUSH_INTERVAL_SECONDS = 900, -- 15 minutes
STALE_QUERY_THRESHOLD_DAYS = 30,
SIZE_BASED_CLEANUP_MODE = AUTO,
QUERY_CAPTURE_MODE = AUTO,
MAX_PLANS_PER_QUERY = 200
);
All other databases (CRM, DigitsSolver, SQLStorm, tpcc, tpch, tpch10) have consistent, sensible Query Store settings:
- State: READ_WRITE ✓
- Compatibility Level: 160 ✓
- Storage utilization: all under 2% ✓
- Cleanup Mode: AUTO ✓
- Capture Mode: AUTO ✓
No changes required for these databases' Query Store configuration.
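The per-database settings can be re-surveyed at any time to confirm they stay standardized; run in each database's context:

```sql
-- Query Store configuration for the current database
SELECT DB_NAME() AS database_name,
       actual_state_desc,
       max_storage_size_mb,
       current_storage_size_mb,
       flush_interval_seconds,
       interval_length_minutes,
       max_plans_per_query,
       query_capture_mode_desc,
       size_based_cleanup_mode_desc
FROM sys.database_query_store_options;
```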
5. Database Integrity Maintenance (CHECKDB)
All 7 user databases show a last CHECKDB date of January 1, 1900, indicating integrity checks have never been recorded. This is a critical operational risk: undetected corruption could lead to unrecoverable data loss, especially combined with the lack of valid backups.
Affected Databases
| Database | Last CHECKDB | Risk Level |
|---|---|---|
| DigitsSolver | Never | Critical |
| WideWorldImporters | Never | Critical |
| SQLStorm | Never | Critical |
| tpcc | Never | Critical |
| tpch | Never | Critical |
| CRM | Never | Critical |
| tpch10 | Never | Critical |
Immediate Action: Run CHECKDB on All Databases
-- Run CHECKDB with DATA_PURITY to also check column value integrity.
-- Run during a low-activity period; CHECKDB reads from an internal
-- database snapshot, so it causes minimal blocking. WITH NO_INFOMSGS
-- reduces output noise.
DBCC CHECKDB (DigitsSolver) WITH NO_INFOMSGS, ALL_ERRORMSGS, DATA_PURITY;
DBCC CHECKDB (WideWorldImporters) WITH NO_INFOMSGS, ALL_ERRORMSGS, DATA_PURITY;
DBCC CHECKDB (SQLStorm) WITH NO_INFOMSGS, ALL_ERRORMSGS, DATA_PURITY;
DBCC CHECKDB (tpcc) WITH NO_INFOMSGS, ALL_ERRORMSGS, DATA_PURITY;
DBCC CHECKDB (tpch) WITH NO_INFOMSGS, ALL_ERRORMSGS, DATA_PURITY;
DBCC CHECKDB (CRM) WITH NO_INFOMSGS, ALL_ERRORMSGS, DATA_PURITY;
DBCC CHECKDB (tpch10) WITH NO_INFOMSGS, ALL_ERRORMSGS, DATA_PURITY;
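After the runs complete, the recorded last-known-good date can be confirmed without parsing DBCC output (property available in SQL Server 2016 SP2 and later):

```sql
-- Last successful CHECKDB per database (example: SQLStorm)
SELECT DATABASEPROPERTYEX(N'SQLStorm', 'LastGoodCheckDbTime') AS last_good_checkdb;
```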
Recommended Ongoing Schedule
Use SQL Server Agent to schedule weekly CHECKDB runs. For larger databases, consider CHECKFILEGROUP or CHECKTABLE approaches to distribute the workload.
-- SQL Server Agent Job Step example for weekly CHECKDB
-- (Create via SSMS or T-SQL; shown here as T-SQL logic)
-- Simple scheduled job step:
EXEC msdb.dbo.sp_add_job
@job_name = N'Weekly_CHECKDB_AllUserDBs';
EXEC msdb.dbo.sp_add_jobstep
@job_name = N'Weekly_CHECKDB_AllUserDBs',
@step_name = N'Run CHECKDB',
@subsystem = N'TSQL',
@command = N'
DECLARE @db_name NVARCHAR(128);
DECLARE @sql NVARCHAR(MAX);
DECLARE db_cursor CURSOR FOR
SELECT name FROM sys.databases
WHERE database_id > 4
AND state_desc = ''ONLINE''
AND is_read_only = 0;
OPEN db_cursor;
FETCH NEXT FROM db_cursor INTO @db_name;
WHILE @@FETCH_STATUS = 0
BEGIN
-- EXEC() cannot take a function call such as QUOTENAME in its
-- argument list, so build the command in a variable first.
SET @sql = N''DBCC CHECKDB ('' + QUOTENAME(@db_name)
         + N'') WITH NO_INFOMSGS, ALL_ERRORMSGS, DATA_PURITY'';
EXEC (@sql);
FETCH NEXT FROM db_cursor INTO @db_name;
END
CLOSE db_cursor;
DEALLOCATE db_cursor;
';
EXEC msdb.dbo.sp_add_schedule
@schedule_name = N'Weekly_Sunday_2AM',
@freq_type = 8,            -- weekly
@freq_interval = 1,        -- Sunday
@active_start_time = 20000;  -- 02:00:00 (HHMMSS encoding)
EXEC msdb.dbo.sp_attach_schedule
@job_name = N'Weekly_CHECKDB_AllUserDBs',
@schedule_name = N'Weekly_Sunday_2AM';
EXEC msdb.dbo.sp_add_jobserver
@job_name = N'Weekly_CHECKDB_AllUserDBs';
6. Security Enhancements
One unencrypted connection was detected: NT SERVICE\SQLTELEMETRY via Shared Memory transport (local machine only). Per analysis guidelines:
- This is a shared memory transport: data never leaves the local machine kernel and cannot be intercepted over a network.
- The login is NT SERVICE\SQLTELEMETRY (the SQL Server CEIP telemetry service); no user data is exposed.
- Risk assessment: Minimal / Acceptable. No action required for this specific connection.
Only one sysadmin member: RockyPC\k_a_f. This is a Windows-authenticated local user account, which is appropriate for a developer workstation. The number is minimal and follows least-privilege principles for a single-user dev machine.
- No excess sysadmin accounts detected.
- SA account status and SQL logins not detailed in the data β verify SA account is disabled if not needed.
-- Verify SA account is disabled (best practice if not using SQL auth)
SELECT name, is_disabled
FROM sys.sql_logins
WHERE name = 'sa';
-- Disable SA if not needed
-- ALTER LOGIN sa DISABLE;
Integrated Security Only = No: SQL Server is running in Mixed Authentication Mode (both SQL and Windows logins accepted). For a developer machine this is commonly needed, but consider:
- Ensure all SQL logins have strong passwords.
- Review and disable any unused SQL logins.
- If only Windows auth is needed, switch to Windows Authentication Mode only.
-- Audit SQL logins (non-Windows)
SELECT name, is_disabled, create_date, modify_date,
LOGINPROPERTY(name, 'PasswordLastSetTime') AS pwd_last_set
FROM sys.sql_logins
ORDER BY name;
Currently both active connections are local (shared memory). If this server is ever accessed remotely or promoted to a shared dev/test environment, enforce TLS encryption:
- Install a server certificate (self-signed is acceptable for dev/test).
- Enable Force Encryption = Yes in SQL Server Configuration Manager → SQL Server Network Configuration → Protocols → Properties.
- Update all client connection strings with Encrypt=True; TrustServerCertificate=True for dev/test, or install a trusted CA certificate for production-like environments.
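Transport and encryption status for current sessions can be audited directly:

```sql
-- Which connections are encrypted, and over which transport
SELECT session_id, net_transport, encrypt_option, auth_scheme, client_net_address
FROM sys.dm_exec_connections;
```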
7. Concurrency Issues (Deadlocks, Locking Patterns)
As workload testing increases (SQLStorm, tpcc, tpch workloads), proactively configure deadlock monitoring via Extended Events to capture future events:
-- Create a lightweight deadlock capture session
CREATE EVENT SESSION [Deadlock_Monitor] ON SERVER
ADD EVENT sqlserver.xml_deadlock_report(
ACTION (sqlserver.database_name,
sqlserver.sql_text,
sqlserver.username)
)
ADD TARGET package0.ring_buffer(SET max_memory = 4096)
WITH (MAX_DISPATCH_LATENCY = 5 SECONDS);
ALTER EVENT SESSION [Deadlock_Monitor] ON SERVER STATE = START;
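Captured deadlock reports can later be read back from the session's ring buffer target:

```sql
-- Extract xml_deadlock_report events from the Deadlock_Monitor ring buffer
SELECT CAST(t.target_data AS XML).query(
           '//RingBufferTarget/event[@name="xml_deadlock_report"]') AS deadlocks
FROM sys.dm_xe_sessions s
JOIN sys.dm_xe_session_targets t
    ON s.address = t.event_session_address
WHERE s.name = N'Deadlock_Monitor'
  AND t.target_name = N'ring_buffer';
```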
8. Operational Best Practices
The backup situation across this server is severely deficient. No recovery is possible for most databases in the event of data loss or corruption:
| Database | Recovery Model | Last Full Backup | Issue |
|---|---|---|---|
| master | SIMPLE | Never | No backup |
| model | FULL | Never | No backup, no log backups |
| msdb | SIMPLE | Never | No backup |
| DigitsSolver | FULL | Never | No backup, no log backups |
| WideWorldImporters | SIMPLE | Oct 2022 (1,283 days ago) | Severely outdated |
| SQLStorm | SIMPLE | Oct 2025 (169 days ago) | No recent backup |
| tpcc | FULL | Never | No backup, no log backups |
| tpch | FULL | Nov 2025 (141 days ago) | No log backups |
| CRM | FULL | Never | No backup, no log backups |
| tpch10 | FULL | Never | No backup, no log backups |
Immediate: Take Full Backups of All Databases
-- Take immediate full backups (with compression enabled after sp_configure change above)
BACKUP DATABASE [master]
TO DISK = N'C:\Backups\master_full.bak'
WITH COMPRESSION, CHECKSUM, STATS = 10;
BACKUP DATABASE [model]
TO DISK = N'C:\Backups\model_full.bak'
WITH COMPRESSION, CHECKSUM, STATS = 10;
BACKUP DATABASE [msdb]
TO DISK = N'C:\Backups\msdb_full.bak'
WITH COMPRESSION, CHECKSUM, STATS = 10;
BACKUP DATABASE [DigitsSolver]
TO DISK = N'C:\Backups\DigitsSolver_full.bak'
WITH COMPRESSION, CHECKSUM, STATS = 10;
BACKUP DATABASE [WideWorldImporters]
TO DISK = N'C:\Backups\WideWorldImporters_full.bak'
WITH COMPRESSION, CHECKSUM, STATS = 10;
BACKUP DATABASE [SQLStorm]
TO DISK = N'C:\Backups\SQLStorm_full.bak'
WITH COMPRESSION, CHECKSUM, STATS = 10;
BACKUP DATABASE [tpcc]
TO DISK = N'C:\Backups\tpcc_full.bak'
WITH COMPRESSION, CHECKSUM, STATS = 10;
BACKUP DATABASE [tpch]
TO DISK = N'C:\Backups\tpch_full.bak'
WITH COMPRESSION, CHECKSUM, STATS = 10;
BACKUP DATABASE [CRM]
TO DISK = N'C:\Backups\CRM_full.bak'
WITH COMPRESSION, CHECKSUM, STATS = 10;
BACKUP DATABASE [tpch10]
TO DISK = N'C:\Backups\tpch10_full.bak'
WITH COMPRESSION, CHECKSUM, STATS = 10;
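Once the backups finish, confirm they are recorded in msdb history and spot-check readability (the path below follows the examples above):

```sql
-- Most recent full backup per database, per msdb history
SELECT d.name,
       MAX(b.backup_finish_date) AS last_full_backup
FROM sys.databases d
LEFT JOIN msdb.dbo.backupset b
    ON b.database_name = d.name
   AND b.type = 'D'   -- D = full database backup
GROUP BY d.name
ORDER BY last_full_backup;

-- Verify a backup file is readable and its checksums are valid
RESTORE VERIFYONLY FROM DISK = N'C:\Backups\master_full.bak' WITH CHECKSUM;
```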
Address Recovery Model / Log Backup Mismatch
Databases in FULL recovery model without log backups have log files that grow continuously and cannot be truncated. Either implement log backups or switch to SIMPLE recovery (if point-in-time recovery is not needed):
-- Option A: Switch to SIMPLE recovery for dev/test databases
-- (choose this if point-in-time recovery is not required;
-- take a full backup first, as above)
ALTER DATABASE DigitsSolver SET RECOVERY SIMPLE;
ALTER DATABASE tpcc SET RECOVERY SIMPLE;
ALTER DATABASE CRM SET RECOVERY SIMPLE;
ALTER DATABASE tpch10 SET RECOVERY SIMPLE;
-- Option B: Keep FULL recovery and add log backups
-- (choose this if you need point-in-time recovery)
BACKUP LOG [DigitsSolver]
TO DISK = N'C:\Backups\DigitsSolver_log.bak'
WITH COMPRESSION, CHECKSUM;
-- Then schedule regular log backups (every 15-60 minutes)
Running SQL Server 2022 RTM-GDR (KB5077465, 16.0.1170.5). This is a GDR (General Distribution Release / security-only patching branch). As of the analysis date, SQL Server 2022 Cumulative Updates are available through CU16+. The current CU branch includes stability, performance, and security fixes beyond the GDR track.
- Visit Microsoft SQL Server 2022 Build Versions to identify the latest CU.
- Test on dev machine (this server) before applying to production equivalents.
- For a developer workstation, updating to the latest CU is lower risk and recommended.
System database files (master.mdf, mastlog.ldf, MSDBData.mdf, MSDBLog.ldf, tempdb, model) are located at C:\Data\MSSQL16.MSSQLSERVER\MSSQL\DATA\ which is a non-default path (default would be the SQL Server install directory). This is acceptable if intentionally configured, but ensure this path is included in backup scope and has appropriate access permissions.
9. Proactive Maintenance Suggestions
No evidence of any SQL Server Agent maintenance jobs. Implement a comprehensive maintenance schedule:
| Task | Frequency | Notes |
|---|---|---|
| DBCC CHECKDB (all user DBs) | Weekly (Sunday 2:00 AM) | Critical β currently never run |
| Full Database Backup (all DBs) | Weekly (Saturday 11:00 PM) | Include compression + checksum |
| Differential Backup (key DBs) | Daily (11:00 PM) | For SQLStorm, CRM, DigitsSolver |
| Log Backup (FULL recovery DBs) | Every 30-60 minutes | If keeping FULL recovery model |
| Index Reorganize (>10% frag) | Weekly | After CHECKDB baseline established |
| Index Rebuild (>30% frag) | Weekly | ONLINE=ON available in Developer Ed |
| Update Statistics | Weekly (or after index rebuild) | FULLSCAN on smaller tables |
| Clean up backup history (msdb) | Monthly | sp_delete_backuphistory |
tempdb has 8 data files (tempdev plus temp2–temp8), which aligns with the recommended practice of one data file per logical CPU, capped at 8 files for machines with more than 8 CPUs. Current tempdb I/O is extremely low. No changes required. If heavy tempdb workloads develop from TPC benchmarks, consider enabling memory-optimized tempdb metadata (Section 2).
With all databases at compatibility level 160 (except WideWorldImporters), the following SQL Server 2022 IQP features are available that can benefit TPC workloads and analytical queries:
- Parameter Sensitive Plan (PSP) Optimization: Automatically creates multiple plans for queries with parameter skew (beneficial for tpcc/tpch workloads).
- Degree of Parallelism (DOP) Feedback: Automatically adjusts parallelism for individual queries based on observed performance (beneficial on the 16-core machine).
- Memory Grant Feedback (persistent, via Query Store): Persists memory grant adjustments across restarts.
- Cardinality Estimation Feedback: Corrects CE model errors over time.
These features are automatically active at compatibility level 160 with Query Store enabled. No additional configuration is needed; they will engage as workloads run and Query Store accumulates data.
Database Mail XPs = 0. Enabling Database Mail and configuring SQL Server Agent alerts for critical conditions (severity 17+ errors and specific error numbers) provides proactive notification of issues:
- Backup failure alerts
- CHECKDB failure / corruption detected
- Disk space warnings
- SQL Server error log severity 17–25 events
This is optional for a developer workstation but useful if the machine runs unattended for extended periods.
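If enabled, the server option follows the same sp_configure pattern as Section 2; mail profiles and accounts are then configured separately through msdb's sysmail stored procedures:

```sql
-- Enable Database Mail XPs
EXEC sp_configure 'show advanced options', 1;
RECONFIGURE;
EXEC sp_configure 'Database Mail XPs', 1;
RECONFIGURE;
EXEC sp_configure 'show advanced options', 0;
RECONFIGURE;
```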