Executive Summary
RockyPC is a single-instance SQL Server 2022 Developer Edition running on a Windows 10 Home host with a diverse set of databases (CRM, DigitsSolver, SQLStorm, tpcc, tpch, tpch10, WideWorldImporters). The server has been running for only ~1 day, which limits some statistics, but several significant operational, integrity, and backup concerns are already clearly visible.
The most urgent concerns are: six user databases have never had CHECKDB run (reporting epoch date 1900-01-01), multiple databases have no full backup on record (including CRM, tpcc, tpch10, and all three system databases), and several databases in FULL recovery model have no log backups, meaning transaction logs will grow without bound. I/O performance is excellent (all latencies ≤1 ms average), wait statistics are dominated by expected background sleeps, and no deadlocks or missing indexes are detected — the server is performing well workload-wise. Configuration improvements such as enabling optimize for ad hoc workloads, enabling backup compression, enabling backup checksum, and tuning Query Store statistics collection intervals are quick wins. Memory configuration is conservative and should be reviewed given the available 32 GB of physical RAM.
🔥 Top Priorities at a Glance
- **Critical #1 — Run DBCC CHECKDB on 6 Databases Immediately**
  WideWorldImporters, SQLStorm, tpcc, tpch, CRM, and tpch10 show a last-run date of 1900-01-01, indicating CHECKDB has never been executed. Undetected corruption is a data-loss risk. Confidence: Very High (99%)
- **Critical #2 — Establish Full Backup Strategy for All Databases**
  CRM, tpcc, tpch10, DigitsSolver, master, model, and msdb have never been backed up. The last WideWorldImporters backup was 1,284 days ago. Data-loss risk is severe. Confidence: Very High (99%)
- **Critical #3 — Fix Log Backup Gap on FULL Recovery Databases**
  DigitsSolver, tpcc, tpch, CRM, tpch10, and model are in FULL recovery with no log backups. Transaction logs will grow indefinitely, and point-in-time recovery is impossible. Confidence: Very High (99%)
- **High #4 — Enable Optimize for Ad Hoc Workloads**
  Currently disabled. Enabling it caches a small stub for single-use ad hoc plans instead of the full plan, reducing plan cache memory pressure. Confidence: High (95%)
- **High #5 — Enable Backup Compression and Checksum by Default**
  Both are disabled. Backup compression reduces backup size and duration; backup checksum enables early corruption detection. Confidence: High (95%)
1. Performance Improvements (CPU, Memory, I/O)
- **Medium — Review Max Server Memory Setting**
  Max server memory is set to 23,168 MB (~71% of 32 GB physical RAM), yet only 452 MB of the 23,168 MB target is committed, indicating very light current activity. The `min server memory` running value of 16 MB is fine. However, with 16 logical CPUs and multiple large databases (tpch, tpch10, WideWorldImporters), SQL Server may benefit from additional memory during analytical workloads. The current cap reserves ~9 GB for the OS, which is generally appropriate for Windows 10, but verify OS needs.
  - Current committed memory: 452 MB — no immediate pressure.
  - Max server memory of 23,168 MB is reasonable given 32 GB physical RAM, but re-evaluate if the workload grows.
  - Consider raising to 26,000–27,000 MB if OS stability allows.
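  If a review concludes more memory can safely go to SQL Server, the change can be sketched as below. The 26,624 MB value is an assumption for illustration only; choose the final number after confirming OS headroom.

  ```sql
  -- Hedged sketch: raise max server memory after verifying OS needs.
  -- 26624 MB (~26 GB) would leave ~6 GB for Windows; adjust to your environment.
  EXEC sp_configure 'show advanced options', 1;
  RECONFIGURE;
  EXEC sp_configure 'max server memory (MB)', 26624;
  RECONFIGURE;

  -- Verify the running value took effect
  SELECT name, value_in_use
  FROM sys.configurations
  WHERE name = 'max server memory (MB)';
  ```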
- **Low — I/O Performance Is Excellent — No Action Required**
  Average data file latencies are ≤1 ms, and log read latencies are 3–7 ms — well within best-practice thresholds (<10 ms data, <5 ms log writes). Log write latencies are effectively 0 ms, indicating fast storage. Total I/O stalls are minimal given the activity levels.
  ✓ I/O subsystem is healthy. No tuning action needed at this time. Confidence: High (90%)
- **Low — Wait Statistics Are Benign — Background Waits Only**
  The two top wait types (`QDS_PERSIST_TASK_MAIN_LOOP_SLEEP` and `PWAIT_DIRECTLOGCONSUMER_GETNEXT`) are background infrastructure sleeps, each representing ~50% of total waits — this is normal and expected behavior. These are the Query Store persistence and log consumer background threads sleeping between work intervals. No user-query waits (CXPACKET, LCK_M_*, PAGEIOLATCH, SOS_SCHEDULER_YIELD) are detected.
  ✓ No actionable wait-based performance bottleneck detected. Confidence: High (92%)
- **Medium — Enable tempdb Metadata Memory-Optimized Tables**
  SQL Server 2019+ supports memory-optimized tempdb metadata, which can significantly reduce system-table contention on busy servers. Currently disabled. With 16 CPUs, this feature can improve scalability if tempdb system-table contention ever becomes an issue.
  Confidence: Medium (65%) — low current tempdb activity; proactive benefit

  ```sql
  -- Enable memory-optimized tempdb metadata (requires restart)
  ALTER SERVER CONFIGURATION SET MEMORY_OPTIMIZED TEMPDB_METADATA = ON;

  -- Verify after restart:
  SELECT SERVERPROPERTY('IsTempdbMetadataMemoryOptimized');
  ```
- **Medium — Verify MAXDOP Configuration for Analytical Workloads**
  MAXDOP is set to 8 on a 16-logical-CPU machine. This is a valid configuration for mixed workloads, but for the analytical TPC-H benchmark databases (tpch, tpch10), allowing higher parallelism may improve query performance. The cost threshold for parallelism of 30 is a reasonable setting, well above the default of 5.
  - Current MAXDOP: 8 (acceptable for an OLTP-dominant mix)
  - For analytical databases, consider query-level MAXDOP hints or Resource Governor to allow higher DOP.
  - Cost threshold of 30 is appropriate for a mixed workload; consider 40–50 if the plan cache shows many parallel plans.
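  One low-risk way to raise parallelism for the analytical databases without touching the server-wide setting is a database-scoped MAXDOP override. A sketch, assuming tpch10 is a candidate (the table name in the hint example is illustrative):

  ```sql
  -- Hedged sketch: let tpch10 queries use all 16 schedulers while the
  -- server-wide MAXDOP of 8 continues to govern the OLTP databases.
  USE tpch10;
  ALTER DATABASE SCOPED CONFIGURATION SET MAXDOP = 16;

  -- Confirm the scoped value
  SELECT name, value FROM sys.database_scoped_configurations WHERE name = 'MAXDOP';

  -- Alternatively, hint an individual analytical query (illustrative table name):
  -- SELECT ... FROM dbo.lineitem OPTION (MAXDOP 16);
  ```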
2. Configuration Optimizations
- **High — Enable Optimize for Ad Hoc Workloads**
  Currently disabled. This option causes SQL Server to store a small stub the first time an ad hoc query executes rather than the full plan, saving significant plan cache memory for single-use queries. This is a safe, always-recommended setting.
  Confidence: Very High (98%) — universally recommended best practice

  ```sql
  EXEC sp_configure 'optimize for ad hoc workloads', 1;
  RECONFIGURE;
  ```
- **High — Enable Backup Compression and Backup Checksum by Default**
  Both `backup compression default` (0) and `backup checksum default` (0) are disabled. Enabling compression reduces backup file size and duration; enabling checksum validates data integrity during backup, allowing earlier detection of corruption.
  Confidence: Very High (97%)

  ```sql
  EXEC sp_configure 'backup compression default', 1;
  RECONFIGURE;
  EXEC sp_configure 'backup checksum default', 1;
  RECONFIGURE;
  ```
- **Medium — Enable Blocked Process Threshold Monitoring**
  The `blocked process threshold` is set to 0 (disabled). Setting it to 5–10 seconds enables the blocked process report event in Extended Events / SQL Trace, which is invaluable for diagnosing blocking chains in OLTP workloads like tpcc.
  Confidence: High (88%)

  ```sql
  EXEC sp_configure 'blocked process threshold (s)', 5;
  RECONFIGURE;
  ```
- **Medium — Set Min Server Memory to a Reasonable Floor**
  The configured `min server memory` is 0 MB (running: 16 MB). On a dedicated SQL Server, setting a minimum floor prevents the OS from aggressively trimming the buffer pool during low-activity periods, reducing plan/buffer cache churn after periods of inactivity.
  Confidence: Medium (72%)

  ```sql
  -- Set min server memory to ~2 GB for a lightly loaded dedicated server
  EXEC sp_configure 'min server memory (MB)', 2048;
  RECONFIGURE;
  ```
- **Low — Upgrade to the Latest SQL Server 2022 CU (KB5077465 Is a GDR)**
  The server is running 16.0.1170.5 (RTM-GDR), which is a General Distribution Release security patch, not the latest Cumulative Update. CUs include bug fixes, stability improvements, and IQP enhancements beyond GDR patches. For SQL Server 2022, the latest CU should be applied.
  - Current: RTM-GDR (KB5077465) — security-focused patch only
  - Recommended: apply the latest SQL Server 2022 CU for full bug fixes and performance improvements
  - Check: Microsoft SQL Server Latest Updates
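  Before and after patching, the installed build can be confirmed from T-SQL; a quick check:

  ```sql
  -- Report build number and update level
  -- (GDR-only builds typically show no CU in ProductUpdateLevel)
  SELECT SERVERPROPERTY('ProductVersion')         AS build,
         SERVERPROPERTY('ProductLevel')           AS product_level,
         SERVERPROPERTY('ProductUpdateLevel')     AS update_level,
         SERVERPROPERTY('ProductUpdateReference') AS kb_reference;
  ```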
- **Low — Review WideWorldImporters Compatibility Level (130 vs 160)**
  WideWorldImporters is running at compatibility level 130 (SQL Server 2016), while all other user databases are at 160 (SQL Server 2022). This database misses out on Intelligent Query Processing, Parameter Sensitive Plan optimization, DOP Feedback, and other SQL Server 2022 engine improvements.
  Confidence: High (85%) — functional testing required before changing

  ```sql
  -- Test thoroughly in a non-production environment first
  ALTER DATABASE WideWorldImporters SET COMPATIBILITY_LEVEL = 160;
  -- Verify Query Store captures any plan regressions after the change
  ```
3. Index Optimization Opportunities
- **Low — Schedule Index Analysis After a Representative Workload Accumulates**
  With only ~1 day of uptime, missing-index and index-usage statistics are not yet representative. Re-run a full index analysis using the Index Tuning goal in AI SQL Tuner after at least 7 days of typical workload for each database, particularly for SQLStorm (highest I/O), WideWorldImporters, tpch, and tpch10.
  - Priority databases for future index review: SQLStorm, WideWorldImporters, tpcc, CRM
  - Analytical databases (tpch, tpch10): consider columnstore indexes for aggregate query patterns (an Enterprise feature available in Developer Edition)
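  When the re-analysis is due, the accumulated suggestions can be pulled straight from the missing-index DMVs. A sketch — treat the output as hints to validate, not indexes to create blindly:

  ```sql
  -- Top missing-index suggestions accumulated since the last restart,
  -- roughly ranked by estimated impact times observed demand
  SELECT TOP (20)
         DB_NAME(mid.database_id)           AS database_name,
         mid.statement                      AS table_name,
         migs.user_seeks + migs.user_scans  AS uses,
         migs.avg_user_impact,
         mid.equality_columns,
         mid.inequality_columns,
         mid.included_columns
  FROM sys.dm_db_missing_index_details AS mid
  JOIN sys.dm_db_missing_index_groups AS mig
      ON mig.index_handle = mid.index_handle
  JOIN sys.dm_db_missing_index_group_stats AS migs
      ON migs.group_handle = mig.index_group_handle
  ORDER BY migs.avg_user_impact * (migs.user_seeks + migs.user_scans) DESC;
  ```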
- **Low — Consider Columnstore Indexes for TPC-H Analytical Databases**
  The tpch and tpch10 databases represent analytical workloads. SQL Server 2022 Developer Edition (Enterprise engine) supports clustered and nonclustered columnstore indexes, which can provide 10–100x query acceleration for aggregation-heavy analytical queries.
  - Evaluate adding nonclustered columnstore indexes on large fact tables in tpch and tpch10
  - Use Query Store to compare plan performance before and after
  - Batch mode processing is leveraged automatically with columnstore
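  A possible starting point, assuming the databases follow the standard TPC-H schema — verify the table and column names actually exist in this database before running:

  ```sql
  -- Hedged sketch: nonclustered columnstore on the TPC-H fact table,
  -- covering the columns typical aggregate queries touch.
  -- Table/column names are from the standard TPC-H schema (an assumption here).
  USE tpch;
  CREATE NONCLUSTERED COLUMNSTORE INDEX ncci_lineitem
  ON dbo.lineitem (l_orderkey, l_partkey, l_suppkey, l_quantity,
                   l_extendedprice, l_discount, l_shipdate);
  ```

  Compare before/after runtimes in Query Store before repeating the exercise on tpch10.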
4. Query Store Optimization
Query Store is enabled and READ_WRITE on all user databases — this is excellent. Several configuration inconsistencies exist between WideWorldImporters and the other databases.
| Database | Flush Interval | Stats Interval | Max Plans/Query | Storage Used | Status |
|---|---|---|---|---|---|
| CRM | 15 min | 60 min | 200 | 0.10% | ✓ OK |
| DigitsSolver | 15 min | 60 min | 200 | 0.10% | ✓ OK |
| SQLStorm | 15 min | 60 min | 200 | 1.40% | ✓ OK |
| tpcc | 15 min | 60 min | 200 | 0.10% | ✓ OK |
| tpch | 15 min | 60 min | 200 | 0.10% | ✓ OK |
| tpch10 | 15 min | 60 min | 200 | 0.10% | ✓ OK |
| WideWorldImporters | 50 min ⚠️ | 15 min ⚠️ | 1,000 ⚠️ | 0.40% | ⚠️ Review |
- **Medium — Standardize WideWorldImporters Query Store Settings**
  WideWorldImporters has non-standard Query Store settings: a 50-minute flush interval (vs the 15-minute standard), a 15-minute stats interval (vs 60 minutes), and 1,000 max plans per query (vs 200). These may be intentional but should be reviewed for consistency. A 50-minute flush interval increases data-loss risk on an unexpected restart; 1,000 plans per query can consume significant storage for high-variation queries.
  Confidence: Medium (75%) — verify whether the non-standard settings are intentional

  ```sql
  -- Standardize WideWorldImporters Query Store settings
  ALTER DATABASE WideWorldImporters SET QUERY_STORE (
      DATA_FLUSH_INTERVAL_SECONDS = 900,  -- 15 minutes
      INTERVAL_LENGTH_MINUTES = 60,
      MAX_PLANS_PER_QUERY = 200,
      QUERY_CAPTURE_MODE = AUTO,
      SIZE_BASED_CLEANUP_MODE = AUTO,
      CLEANUP_POLICY = (STALE_QUERY_THRESHOLD_DAYS = 30)
  );
  ```
- **Low — Leverage SQL Server 2022 IQP Features via Query Store**
  With compatibility level 160 and Query Store enabled, databases automatically benefit from:
  - Parameter Sensitive Plan (PSP) optimization — multiple plans for skewed parameter distributions
  - Degree of Parallelism Feedback — automatic MAXDOP adjustment based on observed performance
  - Memory Grant Feedback — reduces over/under-grant memory issues
  - Cardinality Estimation Feedback — improves estimates over time
  No action required — these are active automatically. Monitor Query Store for plan force overrides that may be needed.
  Confidence: High (90%)
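  Whether these feedback features have actually fired can be checked per database via the Query Store plan-feedback catalog view (new in SQL Server 2022). A sketch, using tpcc as the example database:

  ```sql
  -- List feedback events (DOP, memory grant, CE) captured by Query Store
  USE tpcc;  -- repeat per database of interest
  SELECT pf.feature_desc,
         pf.state_desc,
         qt.query_sql_text
  FROM sys.query_store_plan_feedback AS pf
  JOIN sys.query_store_plan AS p
      ON p.plan_id = pf.plan_id
  JOIN sys.query_store_query AS q
      ON q.query_id = p.query_id
  JOIN sys.query_store_query_text AS qt
      ON qt.query_text_id = q.query_text_id;
  ```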
5. Database Integrity Maintenance (CHECKDB)
| Database | Last CHECKDB | Days Since | Status |
|---|---|---|---|
| DigitsSolver | Apr 12 2026 6:33 PM | 1 | ✓ OK |
| WideWorldImporters | Jan 1 1900 (never) | 46,123 | 🚨 Critical |
| SQLStorm | Jan 1 1900 (never) | 46,123 | 🚨 Critical |
| tpcc | Jan 1 1900 (never) | 46,123 | 🚨 Critical |
| tpch | Jan 1 1900 (never) | 46,123 | 🚨 Critical |
| CRM | Jan 1 1900 (never) | 46,123 | 🚨 Critical |
| tpch10 | Jan 1 1900 (never) | 46,123 | 🚨 Critical |
- **Critical — Run DBCC CHECKDB Immediately on All 6 Affected Databases**
  Schedule and execute CHECKDB on all six databases. For large databases (tpch10, WideWorldImporters), use `PHYSICAL_ONLY` for a faster first pass, then schedule a full check.
  Confidence: Very High (99%)

  ```sql
  -- Quick physical check first (faster for large databases)
  DBCC CHECKDB ('WideWorldImporters') WITH PHYSICAL_ONLY, NO_INFOMSGS;
  DBCC CHECKDB ('SQLStorm') WITH PHYSICAL_ONLY, NO_INFOMSGS;
  DBCC CHECKDB ('tpcc') WITH PHYSICAL_ONLY, NO_INFOMSGS;
  DBCC CHECKDB ('tpch') WITH PHYSICAL_ONLY, NO_INFOMSGS;
  DBCC CHECKDB ('CRM') WITH PHYSICAL_ONLY, NO_INFOMSGS;
  DBCC CHECKDB ('tpch10') WITH PHYSICAL_ONLY, NO_INFOMSGS;

  -- Follow up with a full logical check (can be scheduled during off-peak hours)
  DBCC CHECKDB ('WideWorldImporters') WITH NO_INFOMSGS, ALL_ERRORMSGS;
  -- (repeat for each database)
  ```
- **Critical — Establish a Weekly CHECKDB Maintenance Schedule**
  Implement a SQL Server Agent job or Ola Hallengren's maintenance solution to run CHECKDB weekly on all user databases, with results logged to a maintenance log table.
  Confidence: Very High (99%)

  ```sql
  -- Example: weekly CHECKDB using SQL Agent (configure steps per database)
  USE msdb;
  EXEC sp_add_job @job_name = N'Weekly CHECKDB - All Databases';
  -- Configure steps to run DBCC CHECKDB for each user database
  -- Schedule: weekly, Sunday 2:00 AM (or another low-activity window)

  -- Recommended: Ola Hallengren's solution
  -- https://ola.hallengren.com/sql-server-integrity-check.html
  -- EXECUTE dbo.DatabaseIntegrityCheck @Databases = 'USER_DATABASES',
  --         @CheckCommands = 'CHECKDB';
  ```
6. Security Enhancements
- **Low — Unencrypted Telemetry Connection — Minimal Risk (Informational)**
  The `NT SERVICE\SQLTELEMETRY` connection (SQLServerCEIP) is unencrypted and uses shared-memory transport. Unencrypted telemetry connections are minimal risk because no user data is exposed, and shared-memory connections are local-only (no network exposure). No action required.
  ✓ Telemetry connection via shared memory — local only, no user data. Acceptable. Confidence: High (92%)
- **Low — Sysadmin Membership Is Minimal — Appropriate**
  Only RockyPC\k_a_f holds the sysadmin role. This is an excellent least-privilege posture. As the server hosts multiple databases, continue ensuring application logins use database-specific roles (db_datareader, db_datawriter, or custom roles) rather than sysadmin.
  ✓ 1 sysadmin member — minimal attack surface. Good practice maintained. Confidence: High (90%)
- **Medium — Mixed Authentication Mode Enabled — Review if Needed**
  `Integrated Security Only` is No, meaning SQL Authentication is enabled alongside Windows Authentication. If no SQL logins are actively used, consider switching to Windows-only authentication to reduce the attack surface (especially important if this instance becomes network-accessible).
  Confidence: Medium (75%)

  ```sql
  -- Check for active SQL logins
  SELECT name, type_desc, is_disabled
  FROM sys.server_principals
  WHERE type = 'S' -- SQL logins
    AND name NOT IN ('sa', '##MS_PolicyEventProcessingLogin##', '##MS_PolicyTsqlExecutionLogin##')
    AND is_disabled = 0;

  -- If none are needed, switch to Windows-only authentication
  -- (requires a SQL Server restart; done via SSMS Server Properties > Security)
  ```
- **Medium — Verify the 'sa' Login Is Disabled and Renamed**
  As a security best practice, the `sa` account should be disabled and renamed to reduce brute-force risk. Given that mixed-authentication mode is enabled, this is especially important.
  Confidence: High (85%)

  ```sql
  -- Check sa login status
  SELECT name, is_disabled FROM sys.server_principals WHERE name = 'sa';

  -- Disable sa if not used
  ALTER LOGIN [sa] DISABLE;

  -- Or rename and disable for defense in depth
  ALTER LOGIN [sa] WITH NAME = [sa_disabled];
  ALTER LOGIN [sa_disabled] DISABLE;
  ```
- **Medium — Developer Edition Is Not Licensed for Production — Verify Usage**
  SQL Server Developer Edition has all Enterprise features but is licensed for development and testing only — not production use. If this server hosts any production data or serves production applications, it must be upgraded to a production-licensed edition (Standard or Enterprise).
  ⚠️ Confirm this server is development/test only. License compliance is a legal and business risk. Confidence: Very High (95%) — licensing terms are clear
7. Concurrency Issues (Deadlocks, Locking)
- **Low — Enable Read Committed Snapshot Isolation (RCSI) for OLTP Databases**
  For OLTP databases like tpcc and CRM, Read Committed Snapshot Isolation (RCSI) eliminates reader-writer blocking by using row versioning. This can dramatically reduce contention without application code changes. WideWorldImporters typically ships with snapshot isolation enabled.
  Confidence: Medium (72%) — verify application compatibility first

  ```sql
  -- Check current isolation settings
  SELECT name, is_read_committed_snapshot_on, snapshot_isolation_state_desc
  FROM sys.databases
  WHERE name IN ('tpcc', 'CRM', 'SQLStorm');

  -- Enable RCSI during a low-activity window
  -- (needs brief exclusive access; adds tempdb version store overhead)
  ALTER DATABASE tpcc SET READ_COMMITTED_SNAPSHOT ON WITH ROLLBACK IMMEDIATE;
  ALTER DATABASE CRM SET READ_COMMITTED_SNAPSHOT ON WITH ROLLBACK IMMEDIATE;
  ```
- **Low — Set Up an Extended Events Session for Blocking Monitoring**
  With the blocked process threshold set to 5 seconds (recommended above), create an Extended Events session to capture blocking chains for proactive monitoring, especially for the tpcc OLTP workload.
  Confidence: High (85%)

  ```sql
  -- Create a lightweight blocking-monitor session
  CREATE EVENT SESSION [BlockedProcessMonitor] ON SERVER
  ADD EVENT sqlserver.blocked_process_report
  ADD TARGET package0.ring_buffer (SET max_memory = 51200)
  WITH (MAX_DISPATCH_LATENCY = 5 SECONDS);

  ALTER EVENT SESSION [BlockedProcessMonitor] ON SERVER STATE = START;
  ```
8. Backup & Recovery
| Database | Recovery Model | Last Full Backup | Log Backups | Risk Level |
|---|---|---|---|---|
| master | SIMPLE | Never | N/A | 🚨 Critical |
| model | FULL | Never | None | 🚨 Critical |
| msdb | SIMPLE | Never | N/A | 🚨 Critical |
| DigitsSolver | FULL | Never | None | 🚨 Critical |
| CRM | FULL | Never | None | 🚨 Critical |
| tpcc | FULL | Never | None | 🚨 Critical |
| tpch10 | FULL | Never | None | 🚨 Critical |
| tpch | FULL | 142 days ago | None | High |
| SQLStorm | SIMPLE | 170 days ago | N/A | High |
| WideWorldImporters | SIMPLE | 1,284 days ago | N/A | 🚨 Critical |
- **Critical — Take Immediate Full Backups of All Databases**
  Confidence: Very High (99%)

  ```sql
  -- Immediate full backup of all user and system databases
  BACKUP DATABASE [master] TO DISK = N'C:\Backups\master_full.bak' WITH COMPRESSION, CHECKSUM, STATS = 10;
  BACKUP DATABASE [model] TO DISK = N'C:\Backups\model_full.bak' WITH COMPRESSION, CHECKSUM, STATS = 10;
  BACKUP DATABASE [msdb] TO DISK = N'C:\Backups\msdb_full.bak' WITH COMPRESSION, CHECKSUM, STATS = 10;
  BACKUP DATABASE [DigitsSolver] TO DISK = N'C:\Backups\DigitsSolver_full.bak' WITH COMPRESSION, CHECKSUM, STATS = 10;
  BACKUP DATABASE [CRM] TO DISK = N'C:\Backups\CRM_full.bak' WITH COMPRESSION, CHECKSUM, STATS = 10;
  BACKUP DATABASE [tpcc] TO DISK = N'C:\Backups\tpcc_full.bak' WITH COMPRESSION, CHECKSUM, STATS = 10;
  BACKUP DATABASE [tpch] TO DISK = N'C:\Backups\tpch_full.bak' WITH COMPRESSION, CHECKSUM, STATS = 10;
  BACKUP DATABASE [tpch10] TO DISK = N'C:\Backups\tpch10_full.bak' WITH COMPRESSION, CHECKSUM, STATS = 10;
  BACKUP DATABASE [WideWorldImporters] TO DISK = N'C:\Backups\WideWorldImporters_full.bak' WITH COMPRESSION, CHECKSUM, STATS = 10;
  BACKUP DATABASE [SQLStorm] TO DISK = N'C:\Backups\SQLStorm_full.bak' WITH COMPRESSION, CHECKSUM, STATS = 10;
  ```
- **Critical — Implement a Log Backup Schedule for FULL Recovery Databases**
  Databases in the FULL recovery model must have regular log backups to maintain the log backup chain, enable point-in-time recovery, and allow transaction log space reuse. Without log backups, the transaction log will grow indefinitely.
  Confidence: Very High (98%)

  ```sql
  -- Take log backups for FULL recovery databases (after a full backup exists)
  -- Run every 15-60 minutes via a SQL Agent job
  BACKUP LOG [DigitsSolver] TO DISK = N'C:\Backups\DigitsSolver_log.bak' WITH COMPRESSION, CHECKSUM, STATS = 10;
  BACKUP LOG [CRM] TO DISK = N'C:\Backups\CRM_log.bak' WITH COMPRESSION, CHECKSUM, STATS = 10;
  BACKUP LOG [tpcc] TO DISK = N'C:\Backups\tpcc_log.bak' WITH COMPRESSION, CHECKSUM, STATS = 10;
  BACKUP LOG [tpch] TO DISK = N'C:\Backups\tpch_log.bak' WITH COMPRESSION, CHECKSUM, STATS = 10;
  BACKUP LOG [tpch10] TO DISK = N'C:\Backups\tpch10_log.bak' WITH COMPRESSION, CHECKSUM, STATS = 10;
  BACKUP LOG [model] TO DISK = N'C:\Backups\model_log.bak' WITH COMPRESSION, CHECKSUM, STATS = 10;
  ```

  💡 Alternative: if these are purely test/benchmark databases with no point-in-time recovery requirement, consider switching to the `SIMPLE` recovery model to avoid log growth without backups.

  ```sql
  -- Only if no PITR is needed and the data is expendable/reproducible
  ALTER DATABASE tpch SET RECOVERY SIMPLE;
  ALTER DATABASE tpch10 SET RECOVERY SIMPLE;
  -- (shrink the log after switching)
  DBCC SHRINKFILE (N'tpch_log', 64);
  ```
- **High — Establish an Automated Backup Schedule via SQL Server Agent**
  Implement a comprehensive backup strategy using SQL Server Agent jobs or Ola Hallengren's DatabaseBackup solution (recommended):
  - Full backups: weekly (Sunday, off-hours)
  - Differential backups: daily (except the full backup day)
  - Log backups: every 15–30 minutes for FULL recovery databases
  - Backup verification: weekly RESTORE VERIFYONLY on backup files
  - Backup retention: minimum 14 days; copy backups offsite or to Azure Blob Storage
  Confidence: Very High (98%)

  ```sql
  -- Recommended: Ola Hallengren's backup solution
  -- https://ola.hallengren.com/sql-server-backup.html
  -- Example DatabaseBackup call for full backups
  EXECUTE dbo.DatabaseBackup
      @Databases = 'USER_DATABASES',
      @BackupType = 'FULL',
      @Directory = N'C:\Backups',
      @Compress = 'Y',
      @Verify = 'Y',
      @CheckSum = 'Y',
      @CleanupTime = 336; -- 336 hours = keep 14 days
  ```
9. Operational Best Practices
- **Medium — Move System Database Files Off the Data Drive (if Applicable)**
  System databases (master, model, msdb, tempdb) share the `C:\Data\MSSQL16.MSSQLSERVER\MSSQL\DATA\` path with user database data files on the same `C:\Data\` volume. While I/O performance is currently excellent, mixing system and user databases on the same volume can complicate capacity management. If disk space becomes constrained, consider separating the system databases onto their own volume.
  Confidence: Low (60%) — current I/O is healthy; low urgency
- **Medium — Enable Database Mail for Alerting**
  Database Mail XPs are disabled. Without email alerting, SQL Server Agent job failures, blocked processes, and integrity-check errors will go unnoticed. Enable Database Mail and configure operator notifications for critical events.
  Confidence: High (88%)

  ```sql
  -- Enable Database Mail
  EXEC sp_configure 'Database Mail XPs', 1;
  RECONFIGURE;
  -- Configure via SSMS: Management > Database Mail > Configure Database Mail
  -- Then set up SQL Agent Operators and Alerts for:
  --   severity 16+ errors, job failures, and integrity check results
  ```
- **Low — Log Files Are Already Separated onto a Dedicated Log Drive**
  Transaction logs (*.ldf) are on `C:\Logs\`, separate from the data files on `C:\Data\`. This good practice is already implemented. Ensure the Logs volume has sufficient free space to accommodate log growth, particularly for the FULL recovery databases pending log backup setup.
  ✓ Data and log files are on separate volumes — best practice maintained. Confidence: High (90%)
- **Low — Collation Consideration for New Databases**
  The server collation is `SQL_Latin1_General_CP1_CI_AS`, the legacy default. For new databases on SQL Server 2022, consider `Latin1_General_100_CI_AS_SC_UTF8` or another modern collation for better Unicode support. Existing databases should retain their current collation unless a migration is planned.
  Confidence: Low (55%) — informational; applies to new database creation only
10. Proactive Maintenance Suggestions
- **Medium — Implement a Comprehensive Index Maintenance Schedule**
  Establish automated index maintenance (rebuild/reorganize) and statistics updates. Ola Hallengren's IndexOptimize is the industry standard:
  - Rebuild indexes with fragmentation >30%
  - Reorganize indexes with fragmentation 10–30%
  - Update statistics on tables not covered by an index rebuild
  - Schedule weekly during a maintenance window
  Confidence: High (88%)

  ```sql
  -- Ola Hallengren IndexOptimize
  -- https://ola.hallengren.com/sql-server-index-and-statistics-maintenance.html
  EXECUTE dbo.IndexOptimize
      @Databases = 'USER_DATABASES',
      @FragmentationLow = NULL,
      @FragmentationMedium = 'INDEX_REORGANIZE,INDEX_REBUILD_ONLINE,INDEX_REBUILD_OFFLINE',
      @FragmentationHigh = 'INDEX_REBUILD_ONLINE,INDEX_REBUILD_OFFLINE',
      @FragmentationLevel1 = 10,
      @FragmentationLevel2 = 30,
      @UpdateStatistics = 'ALL',
      @OnlyModifiedStatistics = 'Y';
  ```
- **Medium — Monitor tempdb Usage as the Workload Grows**
  tempdb has 8 data files (tempdev plus temp2–temp8), which matches the standard recommendation of number of files = min(logical CPUs, 8). This is already correctly configured. Monitor for tempdb version store growth if RCSI is enabled on the OLTP databases.
  ✓ tempdb file count (8) is appropriate for 16 logical CPUs. Confidence: High (87%)

  ```sql
  -- Monitor tempdb version store usage
  -- (sys.dm_db_file_space_usage reports on the current database)
  USE tempdb;
  SELECT SUM(version_store_reserved_page_count) * 8.0 / 1024 AS version_store_MB,
         SUM(user_object_reserved_page_count) * 8.0 / 1024 AS user_objects_MB,
         SUM(internal_object_reserved_page_count) * 8.0 / 1024 AS internal_objects_MB
  FROM sys.dm_db_file_space_usage;
  ```
- **Low — Enable SQL Server Audit for Sysadmin and DDL Activity**
  As a developer instance hosting multiple databases, consider enabling SQL Server Audit to track schema changes, permission escalations, and login activity. This provides an audit trail for compliance and troubleshooting.
  Confidence: Medium (72%)

  ```sql
  -- Create a server audit
  CREATE SERVER AUDIT [SecurityAudit]
  TO FILE (FILEPATH = N'C:\Logs\Audit\')
  WITH (ON_FAILURE = CONTINUE);
  ALTER SERVER AUDIT [SecurityAudit] WITH (STATE = ON);

  -- Create a server audit specification for privileged activity
  CREATE SERVER AUDIT SPECIFICATION [PrivilegedActivity]
  FOR SERVER AUDIT [SecurityAudit]
      ADD (SERVER_ROLE_MEMBER_CHANGE_GROUP),
      ADD (SERVER_PERMISSION_CHANGE_GROUP),
      ADD (SUCCESSFUL_LOGIN_GROUP),
      ADD (FAILED_LOGIN_GROUP)
  WITH (STATE = ON);
  ```
- **Low — Re-Run the Full Server Health Analysis After 7+ Days of Workload**
  Many DMV-based metrics (missing indexes, wait statistics, plan cache patterns, I/O patterns) are most informative after a sustained, representative workload. With only ~1 day of uptime, the current baseline is limited. Schedule a full re-analysis after at least 7 days to:
  - Identify missing-index opportunities from accumulated DMV data
  - Review wait statistics for user-query bottlenecks
  - Assess plan cache efficiency after the "optimize for ad hoc workloads" change
  - Validate backup and CHECKDB job execution success
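  Optionally, clearing the cumulative wait statistics now gives the 7-day re-analysis a clean post-change baseline. A sketch — note that this discards the current counters, so snapshot them first if the history matters:

  ```sql
  -- Reset the server-wide wait statistics (irreversible; counters restart at zero)
  DBCC SQLPERF (N'sys.dm_os_wait_stats', CLEAR);

  -- At re-analysis time: top waits accumulated since the reset,
  -- filtering out the benign background sleeps seen earlier
  SELECT TOP (10) wait_type, wait_time_ms, waiting_tasks_count
  FROM sys.dm_os_wait_stats
  WHERE wait_type NOT LIKE 'QDS_%'
    AND wait_type NOT LIKE 'SLEEP_%'
  ORDER BY wait_time_ms DESC;
  ```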
- **Low — Configure Disk-Level Monitoring and Alerting**
  All databases reside on `C:\Data\` and `C:\Logs\`. Monitor disk free space and set alerts at 20% and 10% thresholds. Transaction log growth from the unbacked-up FULL recovery databases (tpcc, tpch, etc.) can rapidly consume disk space.
  Confidence: High (88%)

  ```sql
  -- Check drive free space
  EXEC xp_fixeddrives;
  -- Or use WMI/PowerShell-based monitoring
  -- Set alerts when C:\ drops below 20% free capacity
  ```
Confidence: High (88%)-- Check drive free space EXEC xp_fixeddrives; -- Or use WMI/PowerShell-based monitoring -- Set alerts when C:\ drops below 20% free capacity