Category Archives: Software

SQL Holmes: The Case of the Fist-fighting Log Readers


Every once in a blue moon, you run across a problem that no one has seen before. Sometimes you can’t find anything at all. Sometimes, you can only find unanswered forum questions. Sometimes, you find the worst thing imaginable: a forum post with someone replying to their own question with “Nevermind. Fixed it.” (HOW DID YOU FIX IT YODARULES1971?!?)

We had a similar experience a while back. Allow me to take you through it.

It started with a single, innocuous alert from one of our SQL Server Replication distributors:

DESCRIPTION: Replication-Replication Transaction-Log Reader Subsystem: agent SOMESERVER-SomeDB-6 failed. The process could not execute 'sp_replcmds' on 'SOMESERVER'.

Note: If you don’t have SQL Server Alerts set up on your instances, then you really really should.

We had two databases in this instance that were replicated. One was chugging along just fine, the other was giving the old log reader chestnut:

The process could not execute 'sp_replcmds' on 'SOMESERVER'. (Source: MSSQL_REPL, Error number: MSSQL_REPL20011) Get help: http://help/MSSQL_REPL20011

Experience has taught me that the most likely issues are the following:
1. Somebody changed something (Troubleshooting 101)
2. The owners of the databases involved in replication are wrong (usually not ‘sa’; a quick check is sketched below)
3. A database trigger somewhere is trying to make a cross-database change
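
For number 2, checking (and fixing) database ownership is quick. A minimal sketch, with SomeDB standing in for your published database:

-- Replication tends to get cranky when the owner isn't 'sa'
SELECT name, SUSER_SNAME(owner_sid) AS owner_name
FROM sys.databases;

-- Put it back the way it should be
ALTER AUTHORIZATION ON DATABASE::SomeDB TO sa;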

“I saw some smoke coming from the barn publication.”  -SQL Replication Monitor

Number 1 had happened, as we had deployed schema changes just prior to everything going south. Numbers 2 and 3, however, had not. We could see the error in the replication log: the log reader was trying to parse a primary key change that we had made. There was nothing obviously wrong with the command, so we moved on.

And Then We Tried to Fix It

After a few hours trying to troubleshoot, restarting the log readers, crying for our mommies, yadda yadda, the call was made that we would just reinitialize the publication. Ye ole’ replication wreck-n-restart.

It didn’t work. The publication would not reinitialize, failing with the same log reader error as before. This makes sense, as the log reader is shared among all the publications on an instance. It just couldn’t get past the command that couldn’t be parsed.

With few options left, we did the no-no and reset the log reader.

EXEC sp_replrestart

This is an internal command, run on the publisher, that is used when you need to restore a transactionally replicated database. It basically resets the LSN on the distributor to the highest value on the publisher, for our purposes skipping over the LSN/command that was causing the log reader’s issues.

This is going to be my costume for Halloween next year.

And it worked. Mostly. Replication started to flow again, but we needed to re-initialize the subscribers properly. Re-init was necessary as we’d just skipped all transactions that had accumulated between the time the issue occurred and the current time.

Yes, But What Does It MEAN?!?!

So, we had “resolved” the issue, but we still had no idea what the “issue” really was. After we had righted the ship and re-applied all the dropped indexes at the subscriber, we circled back around to determine the root cause, and why no one else seemed to have any idea what would cause this.

We went to our Junior DBA (Google) and got a pretty narrow range of responses, including the ones I listed before. No one seemed to be having the same issue as us, even with similar errors. So, we had to start at the bottom and work to the top.

Combing through the log reader error logs, one error was different from the others:

Cannot find an object ID for the replication system table 'cdc.change_tables'. Verify that the system table exists and is accessible by querying it directly. If it does exist, stop and restart the Log Reader Agent; if it does not exist, drop and reconfigure replication. (Source: MSSQLServer, Error number: 18807)

Well, that’s different. CDC is enabled on this database and used on quite a few tables. We checked, and the table cdc.change_tables existed; we wouldn’t get very far in life without it. The next step was to look at the CDC error log, which returns the last 64 errors encountered.

SELECT * FROM sys.dm_cdc_errors

And in the sys.dm_cdc_errors output, we found this error:

Log scan process failed in processing a ddl log record. Refer to previous errors in the current session to identify the cause and correct any associated problems.

That seems pretty familiar. It was preceded by the following three errors:

Invalid length parameter passed to the RIGHT function.
Log Scan process failed in processing log records. Refer to previous errors in the current session to identify the cause and correct any associated problems.
Log scan process failed in processing a ddl log record. Refer to previous errors in the current session to identify the cause and correct any associated problems.

Different CDC, but I could see why you would be confused.

This points pretty squarely at CDC as the culprit. When a database is both replicated and CDC-enabled, the two share the same log reader to harvest commands from the transaction log. Apparently, if CDC gets hung up, replication panics and just starts punching itself in the face. It’s an interesting design choice.

Way #938,308,121 To Break Replication

A few weeks later, when deploying more schema changes, the issue resurfaced. Replication dive-bombed, we got the same alert, and everything came to a halt. This time we simply went in and blew away CDC for the entire database. This may not be an option in your environment, but we needed to nuke it from orbit, just to be sure.

USE SomeDB
GO
EXEC sys.sp_cdc_disable_db
GO

And after executing sys.sp_cdc_disable_db, the issue went away. Success?

We haven’t re-implemented CDC on this database yet, as there are a lot more decisions to be made business-wise, so we don’t know if there’s something internally broken or what. At some point we will need to turn it back on, but when and to what degree (or fallout) isn’t clear.
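
When that day comes, re-enabling is the mirror image of the teardown. A minimal sketch, with SomeDB and dbo.SomeTable as placeholders:

USE SomeDB
GO
EXEC sys.sp_cdc_enable_db
GO
-- One call per table you actually need captured
EXEC sys.sp_cdc_enable_table
    @source_schema = N'dbo',
    @source_name = N'SomeTable',
    @role_name = NULL -- NULL skips the gating database role
GO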

What is clear is that something in CDC goofed and it took replication down with it. The moral of the story: if replication is having issues, make sure CDC isn’t having issues as well. Also, use CDC sparingly. Don’t just throw it on every table you have “just ’cause.”

-CJ Julius

5 (more) New Features in SQL Server 2017

FETCH NEXT 5

About a week ago I posted about the Top 5 Things I’m most excited to see in SQL Server 2017. As you may have noticed, I just focused on the Engine/Agent and not on anything else. To be fair (and I’m always fair to myself), most of the big exciting changes were there.

However, I did want to give the rest of the SQL Server features their due. To rectify that oversight, here’s 5 more new things in SQL Server 2017 that I’m excited about.

1. SSIS On Linux
A Linux Cluster.
Image Source: Visual Hunt

This is a bit of a cheat, because I already went over SQL Server on Linux, but I thought this deserved special notice. Did you know you can now run your SSIS packages from your Linux box with SQL Server? You can.

Just pop in this little one-liner and you’re off to the races*.

$ dtexec /F <package name> /DE <protection password>

If push came to shove you could probably put that on a cron job should you want. You still need a Windows server to create and maintain the packages, but you can run them locally from the box if you’re trying to keep the family together for the kids.

* Certain Terms and Conditions may apply. See your dealer for details.

2. Machine Learning Services
Python!

This is a feature that kinda existed previously, but it was just called “R” Services. The big thing of note is that it now supports Python and the associated libraries. See previous post in this series to catch my sarcasm about Python not being included in the first place.

One thing to note about Machine Learning Services is that it’s not supported in-database on Linux. You can still do things like native scoring (PREDICT), but that’s about the long and the short of it. Microsoft is making noises like they’re going to address this in the somewhat-near future.
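
For the curious, native scoring looks roughly like this. A sketch, assuming you’ve stashed a serialized model in a hypothetical dbo.Models table:

DECLARE @model VARBINARY(MAX) =
    (SELECT model_blob FROM dbo.Models WHERE model_name = 'SomeModel');

SELECT d.*, p.Score
FROM PREDICT(MODEL = @model, DATA = dbo.SomeData AS d)
WITH (Score FLOAT) AS p;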

3. SSIS Scale-out

This is a pretty neat feature that I hadn’t thought about before. What if you had several servers that potentially COULD handle an SSIS workload (in an HA scenario or something), but you didn’t want to always target the same instance? You know, spread the love around.

SQL Server 2017 allows you to set up a master on your main instance and then workers on the servers you want to be able to scale-out to. After a bit of setup on your worker machines you can then either target machines with specific packages or let SSIS decide. Check out this walkthrough for more.

You turn this feature on (assuming you’ve set it up properly) in the SSIS Catalog Properties.
Image Source: Microsoft


4. New SSRS Web portal

This is less a new feature and more of a major revamp of something that already existed. The new Reporting Services default Web Portal is a lot snazzier and has some new things. You can customize the branding of the instance and even develop KPIs that are contextual to the folder you are currently viewing.

Purdy.
Image Source: Microsoft


5. MDS Performance Improvements

Master Data Services has had a rough life. Beginning life as far back as SQL Server 2008R2, this has been the red-headed stepchild of the SQL Server offerings. It started out, in my humble opinion, barely usable, unnecessarily complex and just feature-poor. I’m not alone in this opinion.

Subsequent releases have helped it, but even after several versions it still was pretty weak and only useful for very specific cases. MDS only really came into its own in 2016, but with some performance limitations.

Pictured: Master Data Services circa 2008R2.
Image Source: Visual Hunt

Edging ever closer to a more perfect product, SQL Server 2017 features some much needed performance optimization allowing it to stage millions of rows in a reasonable amount of time. It was painfully slow previously with only a few hundred thousand records.

Lastly, they fixed the slow UI movement when doing things like expanding folders on certain pages.

Honorable Mentions

Not any this time, unless you want to talk about SSAS object-level Security or DAX finally getting an IN operator. Those seem pretty useful.

The End?

That’s it for 2017. There are, of course, many many more changes and new features in SQL Server 2017, but I think 10 or so is good enough to give you a taste. There are changes all across the product and I encourage you to look them over yourself.

-CJ Julius

Top 5 New Features in SQL Server 2017 (that I care about)

New Year, New Database Engine

This is either a database or a stack of licorice pancakes.
Image Source: http://icons8.com/

It’s finally here! A few days ago as of this writing, SQL Server 2017 was released for Windows, Ubuntu, RedHat and Docker. There are a lot of new things in SQL Server 2017 from Python support in Machine Learning* to better CLR security. But I thought I’d narrow down the list to changes that I’m most interested in.

1. sudo Your Way To A Better SQL Server

SQL SERVER IS ON LINUX NOW! No surprise this is the first thing on my list as I keep going on and on about it. But it’s really here!

Installation is a cinch, especially if you’re a Linux or Unix person. Curl the GPG keys, add the repository, apt-get (if you’re on a real Distro) the installer and run the setup. It’s really that easy.

If you want it even easier, then check out Docker. Slap it in and go!

All the main features are there, the engine (of course), agent, Full-text search, DB Mail, AD authentication, SQL command-line tools etc. Pretty much everything you need to get going on Linux even if you’re in a Windows-dominated environment.

2. It’s Like A Car That Repairs Itself

So this is a big one. With automatic tuning on, SQL Server can detect performance problems, recommend solutions and automatically fix (some) problems. There are two flavors of this, one in SQL Server 2017 and one in Azure; I’ll be talking about the one in SQL 2017 here.

Automatic plan choice correction is the main feature; it checks whether a new plan has regressed in performance and, if the feature is enabled, reverts to the old plan. As you can see below, a new plan was chosen (4) but it didn’t do so well. SQL 2017 reverted to the old plan (3) automatically and got most of the performance back.

Image Source: Microsoft
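
Flipping it on is a one-liner per database, and a DMV will show you what it’s been up to. A minimal sketch against a hypothetical SomeDB:

Code:
ALTER DATABASE SomeDB
SET AUTOMATIC_TUNING (FORCE_LAST_GOOD_PLAN = ON);

-- What has it noticed, and what has it done about it?
SELECT reason, score, state, details
FROM sys.dm_db_tuning_recommendations;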

I’m sure that Microsoft will be expanding this feature in the future and we can expect to see more from this. Azure already has automatic Index tuning in place, so we’ll probably see that in the On-Prem version eventually.

3. Indexes That Start, Stop And Then Start Again

This is a feature I didn’t know I needed. Basically, it allows an online index rebuild that has stopped or failed (say, it ran out of disk space) to be resumed. The index build fails, you fix whatever made it fail, and then resume the rebuild, picking up from where it left off. The rebuild will be in the ‘PAUSED’ state until you’re ready to RESUME or ABORT it.

Code:
-- Start a resumable index rebuild
ALTER INDEX [NCIX_SomeTable_SomeColumn] ON [dbo].[SomeTable]
REBUILD WITH (ONLINE = ON, RESUMABLE = ON, MAX_DURATION = 60)

-- PAUSE the rebuild:
ALTER INDEX [NCIX_SomeTable_SomeColumn] ON [dbo].[SomeTable] PAUSE

/* Resume either after a failure or because we paused it. This syntax will also cause the resume to wait 5 minutes and then kill all blockers if there are any. */
ALTER INDEX [NCIX_SomeTable_SomeColumn] ON [dbo].[SomeTable]
RESUME WITH (MAX_DURATION = 60 MINUTES,
WAIT_AT_LOW_PRIORITY (MAX_DURATION = 5, ABORT_AFTER_WAIT = BLOCKERS))

/* Or if you just want to stop the whole thing because you hate unfragmented indexes */
ALTER INDEX [NCIX_SomeTable_SomeColumn] ON [dbo].[SomeTable] ABORT

This is also great if you want to pause a rebuild because it’s interfering with some process. You can PAUSE the rebuild, wait for the transaction(s) to be done and RESUME. Pretty neat.

4. Gettin’ TRIM
A cute Sheepy.
There hasn’t been a picture in a while and I was afraid you might be getting bored.

This one is kind of minor, but it excited me a lot because it was one of the first things I noticed as a DBA that made me say ‘Why don’t they have a function that does that?’ (note the single quotes). TRIM will trim a string down based on the parameters you provide.

If you provide no parameters, it will just cut off all the spaces from both sides. It’s equivalent to writing RTRIM(LTRIM(' SomeString ')).

Code:
SELECT TRIM( '.! ' FROM ' SomeString !!!') AS TrimmedString;

Output:
SomeString

5. Selecting Into The Right Groups

Another small but important change. In previous versions of SQL Server, you could not SELECT INTO a specific filegroup when creating a new table. Now you can, and it uses the familiar ON syntax to do it.

Code:
SELECT [SomeColumn] INTO [dbo].[SomeNewTable] ON [FileGroup] FROM [dbo].[SomeOriginalTable];

Also, you can now SELECT INTO to import data from Polybase. You know, if you’re into that sort of thing.

Honorable Mentions!

Here are some things that caught my eye, but didn’t really need a whole section explaining them. Still good stuff, though.

Query Store Can Now Wait Like Everybody Else

Query store was a great feature introduced in SQL Server 2016. Now they’ve added the ability to capture wait stats as well. This is going to be useful when trying to correlate badly performing queries and plans with SQL Server’s various waits.

See the sys.query_store_wait_stats catalog view in your preferred database for the deets. Obviously, you’ll need to turn on Query Store first.
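
Something like this gives you a quick top-waits rollup from the new view (a sketch, not gospel):

Code:
SELECT wait_category_desc,
       SUM(total_query_wait_time_ms) AS total_wait_ms
FROM sys.query_store_wait_stats
GROUP BY wait_category_desc
ORDER BY total_wait_ms DESC;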

Con The Cat With Strings

Just as minor as TRIM in some people’s books, but this is a great function for CONCAT_WS’ing (it’s not Concatenating, right? That’s a different function) strings with a common separator, ignoring NULLs.

Code:
SELECT CONCAT_WS('-','2017', '09', NULL, '22') AS SomeDate;

Output:
2017-09-22

Get To Know Your Host

sys.dm_os_host_info – This DMV returns host OS data for both Windows and Linux. Nothing else, just thought that was neat.
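
A quick taste of what it returns:

Code:
SELECT host_platform, host_distribution, host_release
FROM sys.dm_os_host_info;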

Get Your Model Serviced

Not something I’m jumping for joy about, but it is a good change of pace (I think). No more Service Packs, only Zuul… er… just Cumulative Updates. Check out my article on it if you want to know more about all the Service Model changes.

And Lots Of Other Things

Of course, there are hundreds of other things in SQL Server 2017 and its related features. I didn’t even touch on the SSIS, SSAS, SSRS, MDS or ML stuff. Check out the shortened list here, broken down by category. Exciting new toys!

-CJ Julius

* Seriously, how do you release a Data Science platform and not include Python? That’s like releasing a motorcycle with only the rear tire. Yes, you can technically use it, but you’re limiting yourself to a customer base with a very selective skill-set.

Say Goodbye to Service Packs: SQL Server 2017 Won’t Have Them

Take Your Service Pack and Get Outta Town

Service Packs (SPs) have long been a quick litmus test for determining where you stand when assessing needed upgrades. You could almost ignore CUs (Cumulative Updates) and use the SP to define where you are and where you need to go. 2008R2 SP3? You’re pretty much all patched up. 2012 SP1? Got a ways to go. But that’s about to change with SQL Server 2017, as Microsoft is doing away with Service Packs and just releasing sequential updates as CUs.

In SQL2017 and beyond, every CU will be tested like a Service Pack and contain all the updates, hotfixes and security patches of every CU before it. So we can expect to see versions like SQL Server 2017 CU12.

Well, That’s Nice. But Why?

Microsoft wants to move to a more “agile” method, allowing them to get more updates out faster. Releasing many smaller CUs is faster-to-market and means less patching of odds and ends with hotfixes.

Also, this will just simplify the whole process. Instead of saying 2017 SP3 CU2, it will simply be 2017 CU26 (I just made these up. I am not clairvoyant now, but I will be in the future).

There will be two tracks for updates, the main CU path and a GDR (General Distribution Release) path. GDR path is just security updates (maybe a system-breaking hotfix once in a while). This path will be entirely separate from the normal CU path and you will not be able to jump back and forth between them*.

When Will I Get These CUs?

2012/2014/2016 are all still on the old model. Starting with SQL Server 2017 you’ll see this new servicing model.  After RTM, SQL2017 will get a new CU every month for the first year, but will slow down after that. Microsoft’s reasoning is that most of the major fixes are in the first year, so they want to keep ’em coming during this critical phase. For the remainder of the four years of mainstream support, this pace will slow to one CU every quarter.

If you’re on Linux, it’s the same deal. You’ll be able to pull these CUs from the same repositories that you get SQL Server from. This is kind of a big ‘duh’ but I felt it needed mentioning.

Anything Else?

Sure there is! Lots of odds and ends for you to know. Like:

  • CUs will accommodate localized content (they didn’t before)
  • CUs will still be released the same time every month
    • That’s the week of the 3rd Tuesday, but you knew that
  • You don’t have to be on a specific CU to be supported.
  • CUs will not contain any “net new” features.
  • CUs can be uninstalled from Windows
  • In Linux, install and run the container from a previous CU to do a rollback

And that’s it. Happy patching!

-CJ Julius

*You can go from the GDR path to the CU path, but not back again. Once you’re on the CU path, you’re there for good.

Simple Stored Procedure to Compress Old Tables

Pretty sure the SQL Server compression code looks like this.

I’m always looking for a way to save space in SQL Server. From archiving old data to just flat out deleting unused objects, I take great joy in removing superfluous stuff. The less junk in the system, the easier it is to focus on the things that matter.

…and fit it in a 10 kg bag

The biggest useless space eaters are tables that are (supposedly) no longer used. I could script them with data to a file, but what if they’re 100+GB? I could also back them up to another DB and then drop them from the database; that would certainly free up the space in the original DB. What if they’re needed for some process that I was unaware of and we can’t wait for the time to restore/move them back?

That was my conundrum. So, I decided to implement a process that looks at a single DBA-controlled schema and compresses every table created prior to a certain date. I could TRANSFER a superfluous table to that schema and leave it. At some point in the future, a job would come along and compress it.

If the data was needed within X days, then the table could easily be transferred back to the original schema, no harm, no foul. Also, I would save space, as tables would be automatically PAGE compressed and could be decompressed if needed. De/compression is really fast in SQL Server.
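
Parking a table in that schema (or pulling it back out) is a one-liner each way. A sketch, assuming the ‘Cleanup’ schema and a hypothetical table:

-- Park a table for eventual compression
ALTER SCHEMA Cleanup TRANSFER dbo.SomeOldTable;

-- Bring it back if somebody screams
ALTER SCHEMA dbo TRANSFER Cleanup.SomeOldTable;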

It’s Compression Time

So, this super-simple stored procedure, prCompressCleanupTables (click for the GitHub link), was created. It takes the following parameters (example call below):

  • @CompressBeforeDate – A DATETIME parameter that sets how old a table must be before it is compressed (it looks at the table’s created date)
  • @Schema – A sysname parameter that takes the name of the schema you want to compress. Keep in mind that this is the same schema for every database, so make sure it’s unique (I use the ‘Cleanup’ schema personally, hence the name).
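
A typical call looks like this; a sketch matching the 60-day setup described below:

-- Compress anything in the Cleanup schema older than 60 days
DECLARE @cutoff DATETIME = DATEADD(DAY, -60, GETDATE());
EXEC dbo.prCompressCleanupTables
    @CompressBeforeDate = @cutoff,
    @Schema = N'Cleanup';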

It skips the following databases by default: master, tempdb, model, msdb, distribution, ReportServer, SSISDB. It will skip any database that is in any state other than ONLINE, too.

Also remember that compression is locked to certain editions of SQL Server, as well as being 2008+ (you really need to upgrade if being 2008 is a limiting factor).

I’m Also A Client

I have this implemented as a job on several servers, which checks weekly for new tables to compress in the appropriate databases. It checks for any tables created prior to GETDATE() – 60. I have to say that it runs very quickly, even on large tables.

Let me know if this is helpful to you!

-CJ Julius

Simple Database Inventory Manager 2.3

We got a job.

Always moving forward here at KnowledgeHunter Corp! (Note: not a real corporation.) Just posted the GitHub update for the new SDIM 2.3. There are changes in just about every corner, but the big overall change is the addition of job information to the data pull.

If you have no idea what SDIM is or what I’m yammering about, here’s the run-down. Otherwise, read on for the 2.3 update news.

Dey took er jerbs
New buttons in 2.3 include ‘Jobs’ and ‘Jobs Extended’

The newest feature is the ability to inventory pretty thorough job information. Well, the stuff I’m always looking for, anyway. Also included is some extended information, which right now basically includes just a description with a few other identifying columns. Future versions may gather more data and put it here. PowerShell handles it well, even with several thousand rows/jobs.

Here is the kind of information that you can expect to see:

  • Server/Instance
  • Job Name
  • Is the Job enabled?
  • Schedule Name
  • Is the Schedule enabled?
  • Schedule Type, Occurrence, Recurrence (if applicable) and Frequency
  • Job Description

This information has been added to the ‘Full Inventory’, so be aware that has a lot of new stuff in it too.

Full disclosure: I used a heavily modified version of a few queries from Dattatrey Sindol to pull the jobs data.

Always Squashing

Fixed a few bugs:

The isProduction column no longer displays in the Instances list. That feature was removed from everywhere else (before 1.0); the column itself is finally gone too.

The HIPAA-level feature is now completely removed. I never got this one working right, and I decided that in the future I’ll go with something a bit more general, maybe just ‘priority’ or something.

Fixed a few typos. Me spel gud now.

Known Issues:

Too many varchar(max) columns. I know. I know.

Me Want

So, how do you get this? Well…

If you have never used it before, I would suggest going through the walkthrough I put out a while ago. I may in the future build a more simplified version for people who just want to install and don’t care about the behind-the-scenes. (UPDATE: I did just that, here)

Elseif you’re on a version prior to 2.1, then you should probably just drop all the SQL objects and rebuild with the new 2.3 items. This includes the PowerShell datapull and frontend pieces; they’re the only two PowerShell pieces.

Elseif you’re on 2.1 right now you can just replace the items that were changed. I have created a script that will do this for you, because that’s the kind of guy I am. You may need to run the script twice because I didn’t check for object existence. Running it multiple times won’t hurt. It doesn’t touch the Servers table, but it does drop the InstanceList, so if you have that statically assigned, then you’re going to want to back it up and then re-insert the data with the missing columns removed.

And Here’s A Count of All the Bolts

Lastly, here are the items that were altered in the 2.1 – 2.3 release, if you’re curious:

Tables:
dbo.JobList – This holds all the jobs information. Added for new feature (look! No varchar(max)!)
dbo.InstanceList – Removed a column.

Stored Procedures:
dbo.prGetInstances – Altered to fix bug
dbo.prGetInventory – Altered to fix bug
dbo.prGetJobs – New for new feature
dbo.prGetJobsExt – New for new feature
dbo.prGetServersAndInstances – Altered to fix bug
dbo.prInsertJobList – New for new feature
dbo.prUpdateInstanceList – Altered to fix bug

PowerShell:

DB_DataPull – Altered to pull Jobs (new feature) and fix bugs
DB_DataPull_FrontEnd – Altered to display Jobs/Jobs Extended

Simple Database Inventory Manager 2.1


A while back, I took on a fairly big project: build a database inventory manager that did the following:

  • Dynamically gathered information on SQL Instances/Databases
  • Dynamically gathered information on the OS underneath
  • Compiled and organized this data in a single repository
  • Provided a client GUI front-end for ease of use.
  • Was built on free and already available tools.

When I initially finished Inventory Manager 1.0, I wrote a 5-part series that took the user through the steps of how I built what I did and how everything worked. This was a good way for me to iron out details and also provide some documentation along the way. As the months progressed, I added updates to the code and altered the posts as necessary.

Then everything went silent as I moved on to other things, but I was constantly going back and adding new features and options. During this interim, I was not able to get the code updated on GitHub, and thus it fell behind. Also, it had morphed into its own beast, moving out of “project” and more into a standard “software” mode. To signify its new direction, its name is now Simple Database Inventory Manager™.

As such, I have added the new 2.1 version to its own GitHub project (previously it existed in the SQL code section there with other unrelated snippets) and will update it accordingly. You can just download the new files and overwrite the old ones to get the new version. Spiffy, right?

Look! 200% more buttons!™

I will keep the old 1.x version in the original repository so it doesn’t break links for the walk-through articles.

All that said, what fabulous prizes are in the new version? Glad you asked!

CMS Integration

Now you can point to your Central Management Server and the Simple Database Inventory Manager™ will pull a list of servers and instances from there. It will then go through all of them recursively, and pull whatever data you want back.

In DB_DataPull.ps1, you can switch this on or off with the -UseCMS switch. If on, then you will need to specify the CMS Server with the -CMSServer ‘SOMESERVER’ parameter.

If you use the -UseCMS option, this will delete all data from the repository tables and repopulate them based on what you have in the CMS. This is backwardly compatible with the old system in that if you don’t use the option (off by default) then it will continue to use the Server and Instance List you provided manually.

Fresh, New Buttons

‘Services’ has been folded into the Inventory heading and Other Info is no more, replaced by the Reporting section (below).

Composite buttons have been created to give better side-by-side information from the GUI. Server\Instance and Instance\DB buttons have been added to do this. I think their names are pretty self-explanatory. Click on them. See what happens.

Reporting

Kind of. A bunch of stock reports were added to DB_DataPull_FrontEnd.ps1 under the Reporting heading. These rely on views in the Reporting schema that build the customized information you want returned.

I’m going to leave this schema (Reporting) pretty much alone for now, so users can create their own views and then use those as datasets for SSRS if they want. You can use SDIM™ as the basis for reports that refresh as often as you run DB_DataPull.ps1.

Here’s a quick list of the out-of-the-box reports given in the GUI:

Servers Grouped by OS and Service Pack – A count of all Server OS names and versions.

Instances Grouped By SQL Version – A count of all Instances at a specific server version (ignores Service Pack)

Instances Grouped by SQL Version, Edition – A count of all instances by SQL Version and Edition (ignores Service Pack)

Instances Grouped By SQL Version, Edition and SP – A count of Instances based on SQL Server Version, Edition and Service Pack

Bug Fixes/Minor Enhancements

  • Fixed an incorrectly spelled column.
  • Fixed incorrect calculation of Server number.
  • Added extended properties to all tables.
  • Added a header at the top of all Views and Procedures

And All the Rest

This (Simple Database Inventory Manager™) is of course provided free of charge, use-at-your-own-risk. There is no warranty either expressed or implied. If SDIM™ burns down your data center, uninstalls all your favorite toolbars and ruins your best pair of dress socks, I’m not at fault. Remember to back up your databases!

And if you’ve skipped over everything just to get to the link, here it is: SDIMS v2.1

-CJ Julius

SQL Server on Linux: First 30 minutes

They put a ring on it.

What I gushed over a few posts ago has finally happened! SQL Server has come to Linux (sort of). The database engine is now available as CTP1, and you can get it by adding the repository and running the setup script.

You can follow the walkthrough for your favorite flavor of Linux, so I won’t repeat that here. It’s really very simple, just a matter of pointing to the correct repository and then apt-get install (on Ubuntu). It comes with a setup script that pretty much does all the heavy lifting for you. Keep in mind that this is just a preview, so there aren’t a lot of options and it sticks everything in a single set of directories (logs/data/tempdb).

I had a small problem when I did the install, but it turned out I just needed to update a few packages. In the event you’re not a Linux person, here’s the easiest way to fix this:

$ sudo apt-get update
$ sudo apt-get upgrade

There’s a lot of stuff to dig into in this release, and as newer versions come out I’ll get more in-depth, but I just wanted to make a quick post about what I did in my first thirty minutes.

Behold in awe my INSERT abilities.

After the install, I connected via SQLCMD (as there is no SSMS on Linux yet), using the sa login and the sa password set during the install. I then created a table, dropped a single row into it, and selected it back. Not terribly complex stuff.
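
The smoke test was along these lines (a sketch, not my exact commands):

CREATE TABLE dbo.HelloLinux (id INT, msg VARCHAR(50));
INSERT INTO dbo.HelloLinux VALUES (1, 'Hello from Ubuntu');
SELECT * FROM dbo.HelloLinux;
GO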

I took care to try different cases, adding and neglecting brackets ‘[]’ and semicolons. It responded just as I would expect on a Windows system, which is very reassuring. It’s nice that my T-SQL skills translate seamlessly to the Linux environment, at least internally to SQL Server.

Connected via my own username.
It doesn’t look or act any different than it would if I were connected to a Windows SQL Server instance.

Next, I put my box ‘U64’ on the network and, lo and behold, I was able to remote into it by its Linux hostname from SSMS 2016 on a Windows machine. No additional setup was required. Microsoft appears to be taking this integration of the Linux and Windows environments seriously.

I then created a SQL login for myself and logged in that way. No issues.

Now, as fun as this was, there’s a whole lot missing. The list includes, but is not limited to:

  • Full-text Search
  • Replication
  • Extended Stored Procedures
  • AD authentication
  • SQL Server Agent
  • SSIS
  • SSAS

This is of course just CTP1, so a lot of these items will probably show up later. I mean, SQL Server without the SQL Server Agent? That doesn’t even make sense (I’m looking at you, Express Edition). There is a sort of cascade effect, as other items, like Maintenance Plans, that rely on these missing features are also MIA.

The gang’s all here!

Larger items like Availability Groups will also be absent, because there’s no Linux analogue for them currently. From what the SQL Server team said in their AMA on Reddit, they’re toying around with Red Hat clustering as a replacement in the Linux environment.

The last thing I did before the end of my 30 minutes was to look at the version. As you may or may not know, the Linux version is based on SQL Server vNext, which (as the name implies) is the NEXT version of SQL Server. There was some talk about it being a port of SQL Server 2016, which does not appear to be the case.

SELECT @@VERSION
----------------------------------------------------
Microsoft SQL Server vNext (CTP1) - 14.0.1.246 (X64)
Nov 1 2016 23:24:39
Copyright (c) Microsoft Corporation
on Linux (Ubuntu 16.04.1 LTS)

Note that SQL Server 2016 is version 13.0.

And that’s it! As mentioned before I’ll be doing deeper dives into this as time goes on, at the very least with each CTP. But I have to say I’m happy with the results so far. Everything (that was available) worked as I expected it to work. Nice work MS!

-CJ Julius

A Simple Way to Archive Data

We needed a way to archive data. I have seen this request multiple times in my career, and the most common solutions I have seen used one of the following:

  1. INSERT data into the archive table, then DELETE data from the original table, or
  2. SSIS, or
  3. Table partitioning

All of these options are great, but they all have drawbacks that we weren’t happy with.

We needed our process to meet the following criteria:

  1. Archive anything older than 1 year
  2. Store archive data in a separate database
  3. Run the archive process daily
  4. Do not interfere with other database transactions
  5. Minimal administrative overhead (Isn’t this always the case? 🙂)

Once again, all of the options I mentioned in the first paragraph could have met these criteria, and I’m sure there are many other options as well.  However, I came across an article that presented exactly what I needed:

https://www.mssqltips.com/sqlservertip/2259/sql-server-2008-consume-output-directly-from-the-output-command/

It’s not anything new, as it was introduced in the 2008 version, but it is pretty handy.  I like this option because it only accesses the table from which you are archiving a single time (as opposed to option #1 above), and it comes with very low administrative overhead (unlike options #2 and #3 above).

I created a stored proc which uses dynamic SQL to build archive statements that utilize the method from the article.  The proc is called from a SQL Server Agent job, which runs every 10 seconds for a two-hour period every night.  We had to find the “sweet spot” of how many records to archive at a time versus how often to run the job: if we grab too many records, we can start blocking user queries, but if we grab too few, or run the job too infrequently, we don’t keep up with the volume of data that needs to be archived.  The other good thing about this method is that if the proc is still running when the next scheduled execution comes up, it will just skip that execution and try again 10 seconds later – in our case, missing a few runs is not a big deal.

The other thing to notice is that DEADLOCK_PRIORITY is set to LOW.  This ensures that this proc is always the deadlock victim, and not other user queries.
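
The heart of the pattern looks something like this. A minimal sketch with hypothetical table and column names, not the actual proc:

-- Always be the deadlock victim, never the user query
SET DEADLOCK_PRIORITY LOW;

-- One pass over the source table: the DELETE's OUTPUT feeds the INSERT directly
INSERT INTO ArchiveDB.dbo.SomeTable_Archive (Id, Payload, CreatedDate)
SELECT Id, Payload, CreatedDate
FROM (
    DELETE TOP (5000) FROM dbo.SomeTable
    OUTPUT DELETED.Id, DELETED.Payload, DELETED.CreatedDate
    WHERE CreatedDate < DATEADD(YEAR, -1, GETDATE())
) AS archived;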

Anyway, here is the link to the project.

Enjoy!

-Clint

Stored Procedure to Back Up Encryption Objects

Hello world!  For anyone that may see this, my name is Clint, and I have been working in enterprise-level IT for nearly a decade – mostly on the database side of things.  My buddy, Mr. Julius, has been kind enough to let me post on this blog, so I hope I do it justice.  I plan to share some cool things here.  Enough introduction.  On with it!

This link will take you to a stored procedure that I wrote to back up encryption objects, specifically Service Master Keys, Database Master Keys, and Certificates.  I am relatively new to working with SQL Server encryption, so I’m not sure how this proc will evolve, but for now, it backs up all the encryption objects I need to be concerned with.

Why should we back up these objects?  Well, I won’t describe how SQL Server column-level encryption works – there are plenty of sites dedicated to that, and it would take a lot of typing.  In a nutshell, it’s because of this: if you lose one, or more, of these objects for any reason, you risk losing the ability to decrypt your data (which would basically mean it’s lost forever).  Therefore, you need the entire hierarchy of your encryption objects to be intact to have viable, encrypted data.  Some examples of why these objects may need to be backed up include:

  1. Service Master Keys are unique to a SQL Server instance, so if you want the same key on a separate instance, you must manually restore it there.  These are not carried over in a DB backup/restore operation.
  2. Database Master Keys are re-encrypted when you restore a DB to a SQL Server instance with a different Service Master Key, causing the “downstream” objects to also be re-encrypted (i.e. unreadable).  Therefore, even though these are carried over when you do a backup/restore, if the Service Master Key is different upon restore, you won’t get the desired results.
  3. Certificates can be used to open/close keys (which, in turn, do the actual encryption/decryption).  One certificate can control many keys.  I believe, if you lose the certificate, but still have the keys, you can create a new certificate to control the keys…I am not sure about this!  I still have some testing to do, so I will update this post when I have definitive info.  Either way, to be safe, I back them up anyway.

The above list is not all-inclusive.  There may be many more reasons why you need to backup/restore the objects.

It seems crazy to me that there is not an “out of the box” way to back up these objects through the GUI, and/or schedule it, considering the MSDN page on Service Master Keys says it should be one of your first admin tasks performed on the server.  Maybe it’s a security concern to have something built-in to the product like that.  Who knows?
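
For reference, the proc wraps commands along these lines. A sketch with hypothetical paths and passwords:

-- Instance level: the Service Master Key
BACKUP SERVICE MASTER KEY
    TO FILE = 'D:\KeyBackups\SMK.bak'
    ENCRYPTION BY PASSWORD = 'SomeStrongPassword!1';

-- Per database: the Database Master Key
USE SomeDB;
BACKUP MASTER KEY
    TO FILE = 'D:\KeyBackups\SomeDB_DMK.bak'
    ENCRYPTION BY PASSWORD = 'SomeStrongPassword!1';

-- And any certificates, private keys included
BACKUP CERTIFICATE SomeCert
    TO FILE = 'D:\KeyBackups\SomeCert.cer'
    WITH PRIVATE KEY (
        FILE = 'D:\KeyBackups\SomeCert.pvk',
        ENCRYPTION BY PASSWORD = 'SomeStrongPassword!1');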

Some references are below in the event you want to dig a little deeper.   Hopefully this will prove helpful to someone besides me.  Later!

-Clint

References