
SQL Holmes: The Case of the Fist-fighting Log Readers


Every once in a blue moon, you run across a problem that no one has seen before. Sometimes you can’t find anything at all. Sometimes, you can only find unanswered forum questions. Sometimes, you find the worst thing imaginable: a forum post with someone replying to their own question with “Nevermind. Fixed it.” (HOW DID YOU FIX IT YODARULES1971?!?)

We had a similar experience a while back. Allow me to take you through it.

It started with a single, innocuous alert from one of our SQL Server Replication distributors:

DESCRIPTION: Replication-Replication Transaction-Log Reader Subsystem: agent SOMESERVER-SomeDB-6 failed. The process could not execute 'sp_replcmds' on 'SOMESERVER'.

Note: If you don’t have SQL Server Alerts set up on your instances, then you really really should.

We had two databases in this instance that were replicated. One was chugging along just fine, the other was giving the old log reader chestnut:

The process could not execute 'sp_replcmds' on 'SOMESERVER'. (Source: MSSQL_REPL, Error number: MSSQL_REPL20011) Get help: http://help/MSSQL_REPL20011

Experience has taught me that the most likely issues are the following:
1. Somebody changed something (Troubleshooting 101)
2. The owners of the databases involved in replication are wrong (usually not ‘sa’)
3. A database trigger somewhere is trying to make a cross-database change.

“I saw some smoke coming from the barn publication.”  -SQL Replication Monitor

Number 1 had happened, as we had deployed schema changes just prior to everything going south. Numbers 2 and 3, however, had not. We could see the error in the replication log: the log reader was trying to parse a Primary Key change that we had made. There was nothing obviously wrong with the command, so we moved on.

And Then We Tried to Fix It

After a few hours of troubleshooting, restarting the log readers, crying for our mommies, yadda yadda, the call was made that we would just reinitialize the publication. Ye olde replication wreck-n-restart.

It didn’t work. The publication would not reinitialize, failing with the same log reader error as before. This makes sense, as the log reader is shared among all the publications on a published database. It just couldn’t get over the command that couldn’t be parsed.
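If you ever land in the same spot, a few stock commands will at least show how stuck the log reader is before you reach for the big hammer. A minimal sketch; the database name is a placeholder:

-- Run against the published database: shows the oldest replicated
-- transaction still waiting in the log.
DBCC OPENTRAN ('SomeDB');

-- Throughput and latency counters for each published database.
EXEC sp_replcounters;

-- Transactions marked for replication that haven't been distributed yet.
-- Run in the published database; unlike sp_replrestart, this is read-only.
EXEC sp_repltrans;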

With few options left, we did the no-no and reset the log reader.

EXEC sp_replrestart

This is an internal command, run on the publisher, that is used when you need to restore a transactionally replicated database. It basically resets the LSN on the distributor to the highest value on the publisher, for our purposes skipping over the LSN/command that was causing the log reader’s issues.

This is going to be my costume for Halloween next year.

And it worked. Mostly. Replication started to flow again, but we needed to re-initialize the subscribers properly. Re-init was necessary as we’d just skipped all transactions that had accumulated between the time the issue occurred and the current time.

Yes, But What Does It MEAN?!?!

So, we had “resolved” the issue, but we still had no idea what the “issue” really was. After we had righted the ship and re-applied the indexes dropped at the subscriber, we circled back around to determine the root cause, and why no one else seemed to have any idea what would cause this.

We went to our Junior DBA (Google) and got a pretty narrow range of responses, including the ones I listed before. No one seemed to be having the same issue as us, even with similar errors. So, we had to start at the bottom and work to the top.

Combing through the log reader error logs, one error was different from the others:

Cannot find an object ID for the replication system table 'cdc.change_tables'. Verify that the system table exists and is accessible by querying it directly. If it does exist, stop and restart the Log Reader Agent; if it does not exist, drop and reconfigure replication. (Source: MSSQLServer, Error number: 18807)

Well, that’s different. CDC is enabled on this database and used on quite a few tables. We checked, and the table cdc.change_tables existed; we wouldn’t get very far in life without it. The next step was to look at the CDC error log, which returns the last 64 errors encountered.

SELECT * FROM sys.dm_cdc_errors

And in the sys.dm_cdc_errors DMV, we found this error:

Log scan process failed in processing a ddl log record. Refer to previous errors in the current session to identify the cause and correct any associated problems.

That seems pretty familiar. It was preceded by the following three errors:

Invalid length parameter passed to the RIGHT function.
Log Scan process failed in processing log records. Refer to previous errors in the current session to identify the cause and correct any associated problems.
Log scan process failed in processing a ddl log record. Refer to previous errors in the current session to identify the cause and correct any associated problems.

Different CDC, but I could see why you would be confused.

This points pretty squarely to CDC as the culprit. When a database is both replicated and CDC-enabled, the two share the same log reader to harvest commands from the transaction log. Apparently, if CDC gets hung up, replication panics and just starts punching itself in the face. It’s an interesting design choice.
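If you want to check whether CDC is in the mix on your own systems, a couple of quick queries will tell you (SomeDB is a placeholder):

-- Which databases have CDC enabled?
SELECT name, is_cdc_enabled
FROM sys.databases
WHERE is_cdc_enabled = 1;

-- Which tables are being captured in a given database, and under what capture instance?
USE SomeDB;
SELECT capture_instance,
       OBJECT_NAME(source_object_id) AS source_table
FROM cdc.change_tables;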

Way #938,308,121 To Break Replication

A few weeks later, when deploying more schema changes, the issue resurfaced. Replication dive-bombed, we got the same alerts, and everything came to a halt. We simply went in and blew away CDC for the entire database. This may not be a choice for your environment, but we needed to nuke it from orbit, just to be sure.

USE SomeDB
GO
EXEC sys.sp_cdc_disable_db
GO

And after executing sys.sp_cdc_disable_db, the issue went away. Success?
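If disabling CDC for the whole database is too drastic for your environment, it can also be switched off one table at a time. A hedged sketch, with placeholder schema, table, and capture-instance names:

USE SomeDB
GO
-- Disable capture for a single capture instance instead of the whole database.
EXEC sys.sp_cdc_disable_table
    @source_schema = N'dbo',
    @source_name = N'SomeTable',
    @capture_instance = N'dbo_SomeTable';
GO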

We haven’t re-implemented CDC on this database yet, as there are a lot more decisions to be made business-wise, so we don’t know if there’s something internally broken or what. At some point we will need to turn it back on, but when and to what degree (or fallout) isn’t clear.

What is clear is that something in CDC goofed and it took replication down with it. Moral of the story is: If replication is having issues, make sure CDC isn’t having issues as well. Also, use CDC sparingly. Don’t just throw it on every table you have “just ’cause.”

-CJ Julius

Say Goodbye to Service Packs: SQL Server 2017 Won’t Have Them

Take Your Service Pack and Get Outta Town

Service Packs (SPs) have long been a quick litmus test for determining where you stand when assessing needed upgrades. You could almost ignore CUs (Cumulative Updates) and use the SP to define where you are and where you need to go. 2008R2 SP3? You’re pretty much all patched up. 2012 SP1? Got a ways to go. But that’s about to change with SQL Server 2017, as Microsoft is doing away with Service Packs and just releasing sequential updates as CUs.

In SQL2017 and beyond, every CU will be tested like a Service Pack and contain all the updates, hotfixes and security patches of every CU before it. So we can expect to see versions like SQL Server 2017 CU12.
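If you want a quick way to see where an instance sits under either model, SERVERPROPERTY already exposes it. Treat this as a sketch: ProductUpdateLevel is a newer property, so expect NULLs on older builds.

SELECT
    SERVERPROPERTY('ProductVersion')     AS BuildNumber,       -- the full build number
    SERVERPROPERTY('ProductLevel')       AS ServicePackLevel,  -- 'RTM', 'SP1', etc.
    SERVERPROPERTY('ProductUpdateLevel') AS CULevel,           -- 'CU12', etc. (newer builds only)
    SERVERPROPERTY('Edition')            AS Edition;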

Well, That’s Nice. But Why?

Microsoft wants to move to a more “agile” method, allowing them to get more updates out faster. Releasing many smaller CUs is faster-to-market and means less patching of odds and ends with hotfixes.

Also, this will just simplify the whole process. Instead of saying 2017 SP3 CU2, it will simply be 2017 CU26 (I just made these up. I am not clairvoyant now, but I will be in the future).

There will be two tracks for updates, the main CU path and a GDR (General Distribution Release) path. GDR path is just security updates (maybe a system-breaking hotfix once in a while). This path will be entirely separate from the normal CU path and you will not be able to jump back and forth between them*.

When Will I Get These CUs?

2012/2014/2016 are all still on the old model. Starting with SQL Server 2017 you’ll see this new servicing model.  After RTM, SQL2017 will get a new CU every month for the first year, but will slow down after that. Microsoft’s reasoning is that most of the major fixes are in the first year, so they want to keep ’em coming during this critical phase. For the remainder of the four years of mainstream support, this pace will slow to one CU every quarter.

If you’re on Linux, it’s the same deal. You’ll be able to pull these CUs from the same repositories that you get SQL Server from. This is kind of a big ‘duh’ but I felt it needed mentioning.

Anything Else?

Sure there is! Lots of odds and ends for you to know. Like:

  • CUs will accommodate localized content (they didn’t before)
  • CUs will still be released the same time every month
    • That’s the week of the 3rd Tuesday, but you knew that
  • You don’t have to be on a specific CU to be supported.
  • CUs will not contain any “net new” features.
  • CUs can be uninstalled from Windows
  • In Linux, install and run the container from a previous CU to do a rollback

And that’s it. Happy patching!

-CJ Julius

*You can go from the GDR path to the CU path, but not back again. Once you’re on the CU path, you’re there for good.

Simple Stored Procedure to Compress Old Tables

Pretty sure the SQL Server compression code looks like this.

I’m always looking for a way to save space in SQL Server. From archiving old data to just flat out deleting unused objects, I take great joy in removing superfluous stuff. The less junk in the system, the easier it is to focus on the things that matter.

..and fit it in a 10 kg bag

The biggest useless space eaters are tables that are (supposedly) no longer used. I could script them with data to a file, but what if they’re 100+GB? I could also back them up to another DB and then drop them from the database; that would certainly free up the space in the original DB. What if they’re needed for some process that I was unaware of and we can’t wait for the time to restore/move them back?

That was my conundrum. So, I decided to implement a process that looked at a single DBA-controlled schema and compressed every table created prior to a certain date. I could TRANSFER the superfluous table to that schema and leave it. At some point in the future, a job would come along and compress it.

If the data was needed within X days, then the table could easily be transferred back to the original schema; no harm, no foul. I would also save space, as tables would be automatically PAGE compressed and could be decompressed if needed. De/compression is really fast in SQL Server.

It’s Compression Time

So, this super-simple stored procedure, prCompressCleanupTables, was born (click for the GitHub link). It takes the following parameters:

  • @CompressBeforeDate – A DATETIME parameter; tables created before this date will be compressed (it looks at the table’s create date)
  • @Schema – A sysname parameter that takes the schema name you want to compress. Keep in mind that this is the same schema for every database, so make sure it’s unique (I use the ‘Cleanup’ schema personally, hence the name).

It skips the following databases by default: master, tempdb, model, msdb, distribution, ReportServer, SSISDB. It will skip any database that is in any state other than ONLINE, too.

Also remember that compression is locked to certain editions of SQL Server, as well as being 2008+ (you really need to upgrade if being 2008 is a limiting factor).
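Under the hood, the procedure is presumably just generating the standard rebuild syntax for each qualifying table in the target schema; doing it by hand for one table looks roughly like this (placeholder names, not the procedure’s exact output):

-- Park the (supposedly) unused table in the DBA-controlled schema...
ALTER SCHEMA Cleanup TRANSFER dbo.SomeOldTable;

-- ...and later, compress it. Rebuilding WITH (DATA_COMPRESSION = NONE) reverses it.
ALTER TABLE Cleanup.SomeOldTable REBUILD WITH (DATA_COMPRESSION = PAGE);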

I’m Also A Client

I have this implemented as a job on several servers, which checks weekly for new tables to compress in the appropriate databases. It checks for any tables created prior to GETDATE() - 60. I have to say that it runs very quickly, even on large tables.
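The job step itself is just a call to the procedure; something along these lines (the parameter values here are mine, so adjust to taste):

-- Weekly job step: compress anything in the Cleanup schema older than 60 days.
DECLARE @Cutoff DATETIME = DATEADD(DAY, -60, GETDATE());

EXEC dbo.prCompressCleanupTables
    @CompressBeforeDate = @Cutoff,
    @Schema = 'Cleanup';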

Let me know if this is helpful to you!

-CJ Julius

(Almost) Everything is Going Open Source Now… and I LOVE it.

Why can’t we be friends?

While I’m putting together my big update on Inventory Manager, I thought I’d take some time to throw confetti into the air. There may be some excited clapping as well. I warned you.

I largely see myself as platform-agnostic. While I think that certain companies do individual products well, I also believe it’s fair to say that none of them do everything well. I use Android phones and Apple tablets, Linux for home (mostly) and Windows at work. Heck, I’ve got a Roku and a Chromecast because they both do things that the other doesn’t.  I’m all over the map, but all over the map is a great place to be, especially in the tech industry now.

Despite all of this, I have to admit I am partial to Free Open-Source Software (FOSS). Give me a choice between Ubuntu and Windows, and all other things being equal, I’ll choose the Debian-based option. I’ll admit my biases.

So, when MS started moving in this direction I was happy. I wanted to see this trend continue, and boy has it. First of all…

1. .NET Core is now running on Red Hat.

When Microsoft announced that .NET was going open-source, I was cautiously optimistic. I’m not a big .NET coder, but I could see the benefit and was hopeful that MS would continue down this path. This led to some cool things that I thought I’d never see in a million years, like .NET running on Red Hat.

There’s understandably some cynicism about Microsoft’s true intentions, as well as their long term goals, but this is the cross-over that I’ve been wanting to happen for a while. Blending the strengths of RHEL with .NET on top is a great start. If the .NET development platform can be ported, why not parts of the Windows Management Framework? We could even one day see…

2. Powershell on OSX and Linux.

I didn’t always like Powershell; in fact, prior to Powershell 3, I just referred to it as PowerHell. Since 4.0, however, it’s no secret that I’m a fan; one look at my GitHub will tell you that. I like its logical approach to (most) things and that it works for simple scripts quite easily, while being a powerhouse (no pun intended) behind the scenes.

This is the coolest thing ever.

This shell coming to OSX and Linux will be a boon for both systems. While I am, and will probably always be, a bash scripting guy, Powershell in Windows just makes everything so gosh-darn easy. If I could whip up a PS1 script with a few imported modules and attach it to a cron job with ease, then I think everybody wins (mostly me). But, if I decide that I want to use bash instead, that’s okay because…

3. Bash is running on Windows.

This isn’t a one way transition. Microsoft is making a trade, bringing one of the most widely used shells to Windows. This not only makes scripts more portable, but also knowledge.

Have some ultra-fast Linux bash script that works wonders? Super, you now have it on Windows, too. Wrote a script to do some directory work in Powershell? Great, you now know how to do it in Linux.

I’m sorry, THIS is the coolest thing ever.

There are very few downsides to this, other than the obvious security issues and the fact that it isn’t truly a stand-alone shell (it’s part of Ubuntu on Windows). In any case, it allows interoperability between software from different systems. This is great now that…

4. SQL Server is on Linux.

This isn’t technically going open source, as it will run inside a container, but the idea that this will now be possible and supported is like something out of my greatest dreams.

I have a maybe-controversial opinion that SQL Server is the best relational database system out there. For all its faults, I’d rather use SQL Server 2005 SP1 than Oracle 12c. Just the way I feel, and for reasons I won’t go into here. I hope the things I like about SQL Server translate to the Linux environment.

The fact that Ubuntu is supporting this with Microsoft is great. I can’t wait to use my favorite OS with my favorite database engine on the same system.

Last thoughts

There are other items I’ve glossed over, but these are the big ones to me. Soon, we will be able to run SQL Server on Ubuntu Linux with cron jobs executing Powershell for a .Net application that resides on an RHEL box. *excited clapping* (I warned you.)

It’s a great time to be in the tech industry.

-CJ Julius

 

Creating a Simple Database Inventory Manager with Powershell – Part IV: GUI Front-End

Last Time, on Inventory Manager…
Let’s make things pretty.

Our data pull script has run, the database contains all our server\instance\database information, and flowers are blooming; things are good. If this doesn’t sound like anything you have done, head back to the Introduction to see if you missed something.

Now it’s time to get everything connected so we can just fire up a GUI and press some buttons to get the data we need, fast.

How the Sausage is Made

If you’ve been following along with this series, and you’ve set up everything as instructed, then you should be able to download the pre-made GUI script and run it out of the box. If you’re pointing to a custom instance or database just change the $RepositoryInstance and/or $RepositoryDB before firing it up. If you want to learn more about how this was put together, keep reading. If you don’t care how the whazits work, you’re done.

At the top of our list is to create a form with buttons and give them names so we can call them in our Powershell. You can either build the form manually with this guide here, or use Visual Studio*. I’m going to be using the latter method because it’s the most versatile, and frankly the easiest. If you use the former, then you’re kind of on your own. Sorry.

In the Visual Studio method (I’ll be using 2013 Ultimate) you’ll be utilizing Windows Forms and then running them through a “cleaner” to make them Powershell ready. This guide at FoxDeploy explains the whole thing spectacularly and shows you how to create some very complicated UIs that are Powershell-friendly. I’d recommend going through Parts I and II, as they cover what you’ll need for what we’re going to build here, and then coming back. Don’t worry, I’ll wait.

Then We Build

Got the GUI code? Cool. The first part of Stephen’s code uses a -replace to filter the Windows Form code and make it work in Powershell. I took that piece and made it a second script so I could just have the clean version of the XAML in my final code. You can find that code here.

Just copy/paste your <Window>…</Window> code over the commented area and run the script. It will spit out the final code and tell you all of the objects you can tie actions to (Name, Value). Then drop in Stephen’s XAML reader code to the main script with the cleaned code and you should have a GUI… that does nothing.
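For reference, the reader code you’re dropping in looks roughly like this. This is a condensed sketch of Stephen’s pattern; it assumes the cleaned XAML is sitting in $inputXML, and the variable names are mine:

Add-Type -AssemblyName PresentationFramework

[xml]$xaml = $inputXML
$reader = New-Object System.Xml.XmlNodeReader $xaml
$Form = [Windows.Markup.XamlReader]::Load($reader)

# Create a $WPF<Name> variable for every named control on the form,
# e.g. $WPFBt_All_Data for a button named "Bt_All_Data".
$xaml.SelectNodes("//*[@Name]") | ForEach-Object {
    Set-Variable -Name "WPF$($_.Name)" -Value $Form.FindName($_.Name)
}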

Whee.

As I mentioned before, when you pushed the XAML code through WPF_to_PSForm.ps1, it told you what the objects are on your form. For our purposes, this is simply a few buttons that need to be tied to stored procedures. We do this through .Add_Click(), as in the example below:

$WPFBt_All_Data.Add_Click(
{
$sqlCommand = "
EXEC dbo.prGetInventory;
"
$dataset = Invoke-SQL -datasource $RepositoryInstance -database $RepositoryDB -sqlCommand $sqlCommand
Write-Host $dataset
$dataset | Out-GridView -Title "Database Inventory"
}
)

Nothing crazy that we haven’t been doing, other than using Out-GridView. This cool little cmdlet pushes datasets out to a customizable table with filtering, sorting, the ability to remove columns, etc. -Title “SomeTitle” sets the window title.

Sample sorted Database List with a few columns removed.

Once you’ve coded all of the buttons, add the form display at the bottom, piping to Out-Null to suppress messages:

$Form.ShowDialog() | out-null

And that’s done. A Winner is You!

What now?

Using these scripts, you can go out and grab any information from the servers\instances you specify, pull it back into a centralized location and then use a GUI front-end to make fine tuning and retrieval easy. As I stated previously, this is a bare-bones system to centralize your database information. You can gather any piece of information from the Server, Instance or Database level by using the same tools that are currently collecting and retrieving this information.

It’s been a long journey, but thanks for sticking with it! If you want to make any alterations to the code or tighten it up (Odin knows that it needs it), feel free to make the changes and shoot them back to me. I’ll definitely give you credit for significant changes in this blog or the code itself.

Also, and I think this goes without saying, but if you want to use this in your personal or business environment: have at it! Just please make sure you give me proper credit, with maybe a link back to my blog/Twitter/Linkedin? That’d be super cool of you.

Thanks again and happy Inventorying!

–CJ Julius

*Full disclosure: I have not tried this with the Community version of Visual Studio, so all the features may not be there.

Creating a Simple Database Inventory Manager with Powershell – Part III: Data Pull

Powershell time; no really.
I come bearing scripts.

Now it’s time to get this thing moving. We’re going to go out to each of our server\instances and pull back the information for our tables, updating them with the stored procedures from the last section.

We’re going to be looking at this script [DB-DataPull.ps1]. It’s about as simple as I could get it for our needs. There’s not a lot of frills, but it’s a good cop and it. gets. results.

If you think you missed something you can go back to Part II: Stored Procedures or check out the Introduction.

Get this Jalopy on the Road

The only thing you need to do is specify where the repository is. If the repository is on your local machine in the DBAdmin database then you need to change nothing.

$RepositoryInstance = '(local)'
$RepositoryDB = 'DBAdmin'

After that you’re done. Seriously. The rest of this post is going to be about the nuts and bolts of the script and what does what and why. If you’re looking to just get it fired up then you’re done. Be gone with you.

What’s in the box?

The first few functions (Get-Type and Out-DataTable) are required to turn multi-line WMI-Object output into DataTables so we can insert them into the Repository. These have been cleaned up and/or modified to fit our needs but are based on the code in the two links I provided.

The Invoke-SQL function is a pared-down version of a pretty popular script for sending dynamic SQL directly to a SQL Server. There’s not much to be said about this one other than it opens a connection, sends the command, and returns the results as a datatable.
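If you’d rather see it than chase the link, the core of an Invoke-SQL-style function is only a few lines of ADO.NET. Here’s a minimal sketch, with integrated security assumed and no error handling:

function Invoke-SQL {
    param(
        [string]$datasource,
        [string]$database,
        [string]$sqlCommand
    )

    # Open a connection, run the command, and hand back the results as a DataTable.
    $connectionString = "Data Source=$datasource;Initial Catalog=$database;Integrated Security=SSPI;"
    $connection = New-Object System.Data.SqlClient.SqlConnection $connectionString
    $command    = New-Object System.Data.SqlClient.SqlCommand -ArgumentList $sqlCommand, $connection
    $adapter    = New-Object System.Data.SqlClient.SqlDataAdapter -ArgumentList $command
    $dataTable  = New-Object System.Data.DataTable

    $connection.Open()
    [void]$adapter.Fill($dataTable)
    $connection.Close()

    # The leading comma keeps PowerShell from unrolling the DataTable into rows.
    return ,$dataTable
}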

Time to get into the meat of the process. First up, let’s grab all the Instance information using the stored procedure we built in the last post.

$ConnectionString = Invoke-SQL -datasource $RepositoryInstance -database $RepositoryDB -sqlCommand "
EXEC dbo.prGetConnectionInformation;
"

We use a foreach loop to cycle through the rows, connecting to each instance to pull its information. We’ll remove the ‘\\MSSQLSERVER’ part since that will actually break our connection, even though it’s the name of the instance (for more information on why this is, see every other Microsoft product ever created).

foreach ($Row in $ConnectionString.Rows)
{
Try
{
$SubConnection = $($Row[0]) -replace '\\MSSQLSERVER',''
$InstanceID = $($Row[2])
Write-Debug $InstanceID
Write-Debug $SubConnection
$Version = Invoke-SQL -datasource $SubConnection -database master -sqlCommand "
SELECT SERVERPROPERTY('productversion'), SERVERPROPERTY ('productlevel'), SERVERPROPERTY ('edition'), @@VERSION
"
}
...

And then with another loop we use dbo.prUpdateInstanceList to push all that into our database.

Invoke-SQL -datasource $RepositoryInstance -database $RepositoryDB -sqlCommand "
EXEC dbo.prUpdateInstanceList
@MSSQLVersionLong = '$MSSQLVersionLong'
,@MSSQLVersion = '$MSSQLVersion'
,@MSSQLEdition = '$MSSQLEdition'
,@MSSQLServicePack = '$MSSQLServicePack'
,@InstanceId = $InstanceID
"

That’s it for the Instance information; let’s get the database information. We use the same process to generate the connections as we did before, so I’m going to skip that. The only change you should note is the inclusion of the statement TRUNCATE TABLE dbo.DatabaseList, since we are going to completely repopulate it. This way, whether databases are added or removed, we’re starting each pull with a clean slate.

We get our data via a CTE…

$DataPull = Invoke-SQL -datasource $SubConnection -database master -sqlCommand "
with fs
as
(
select database_id, type, size * 8.0 / 1024 size
from sys.master_files
)
select
$InstanceID AS 'InstanceId',
name,
(select sum(size) from fs where type = 0 and fs.database_id = db.database_id) AS DataFileSizeMB
from sys.databases db
ORDER BY DataFileSizeMB
"

…and push it into the Repository via our stored procedure.

Invoke-SQL -datasource $RepositoryInstance -database $RepositoryDB -sqlCommand "
EXEC dbo.prInsertDatabaseList
@DatabaseName = '$DatabaseName'
,@InstanceListId = '$InstanceListId'
,@Size = $Size"

Lastly, we’ll get Service and Server information with the same rinse-and-repeat method, with one notable exception. If you try to return the results of Get-WmiObject and parse them straight into a SQL table, then you’re going to have a bad time.

This is where our two functions from the beginning come into play. Out-DataTable and its sidekick Get-Type return the results in the proper type for our foreach loop.

$ServerInfo = Get-WmiObject win32_Service -Computer $Row[0] |
where {$_.DisplayName -match "SQL Server"} |
select SystemName, DisplayName, Name, State, StartMode, StartName | Out-DataTable

Now, if you run EXEC dbo.prGetInventory on your Repository database, you should see all of the information you could ever want right there. Magic.

But Wait, There’s More!

Now we’ve got all the data in one place, which is nice and all, but what if we want to get this information quickly? Sure, we can jump into SSMS and run the procedures that have the data we want. However, I propose we make a GUI front-end so we can win friends and get free drinks.

Something like this?

We’ll do that in Part IV: The Voyage Home GUI Front-End.

–CJ Julius

Creating a Simple Database Inventory Manager with Powershell – Part I: Building the Repository Database

I’ve got it! We’ll put the databases in a database.

This first part is simply setting up the database and the tables underneath it. I’ve tried to make this as painless as possible by providing scripts to do most of this for you. I’ll use this infrastructure piece to explain some of the data that we’ll be pulling back.

This is by no means an exhaustive list of all the information that can be retrieved, but it serves as a foundation to show how all data could be pulled back to these tables. The only limit is determining how you’re going to retrieve this data.

This part will require some legwork from you, as you’ll need to enter the ServerName and InstanceName values along with their corresponding Id fields. You should only have to do this once*.

If you’re unclear about what all this is, and think you missed something, check out the Introduction.

[DBAdmin_QA]

[DBAdmin_QA.sql] – Setup script for 2012 can be retrieved here.

This is the database that will act as the repository for all of the information we want to collect. You can call it whatever you want, as you can change this in the code later, but I don’t recommend it the first time around. Why make it more complicated than it needs to be?

Make sure you take a look at the settings and verify they fit to your environment. It is intentionally small and grows slowly as we most likely will not be making it larger by leaps and bounds.

You will also want to make sure that the user you will be making updates/deletes/etc. as has full access to this database. This is a ‘durh’, but I always have to say it.

[dbo].[ServerList]

[dbo.ServerList.sql] – Setup script for 2012 can be retrieved here.

This is our master list of all Servers that contain SQL instances we want to know about. Initially, you’ll want to insert all the Server Names (fully qualified if necessary for your environment) leaving the other fields blank. I don’t put things in bold unless they’re important.

An example of T-SQL code to insert these items:

INSERT INTO dbo.ServerList
( ServerName )
VALUES
('SOMESERVER'),
('SOMEOTHERSERVER')

Notable Columns:

Id – Identity Column for all Servers
ServerName – Manually entered Server Name
IPAddress – IP Address of Server
OSName – Name of the Operating System
OSServicePack – Number of the Service Pack

[dbo].[InstanceList]

[dbo.InstanceList.sql] – Setup script for 2012 can be retrieved here.

Contains all of the Instances and the proper ServerListId (Foreign Key to dbo.ServerList.Id) as well as some related information. You’ll need to INSERT all of the Instances and match them up to the ServerList.Id column from when you inserted the servers.

At some point in the future I may add code that dynamically builds the InstanceList, but that has not been done. This is because there are some instances that ‘exist’ on servers but are shut down or disabled in some way. This allows the Instance to be listed in your Full Inventory even if it can’t be accessed. Wouldn’t be much of an Inventory if we weren’t able to inventory it.

An example of T-SQL code to insert these items:

INSERT INTO dbo.InstanceList
(InstanceName,ServerListId )
VALUES
('NAMEDINSTANCE',1)
,('MSSQLSERVER',2)

If you’re a lazy bum and don’t like matching ServerList.Id’s to InstanceList.ServerListId:

INSERT INTO dbo.InstanceList
( InstanceName
,ServerListId
)
SELECT 'NAMEDINSTANCE',
sl.Id
FROM dbo.ServerList sl
WHERE sl.ServerName = 'SOMESERVER'

UPDATE: I’ve made this even easier with a new stored procedure Utility.prInsertNewServerAndInstance. If this is your first time seeing this then there was no update and this procedure has always been here. Look! Over there! Something distracting!

Notable Columns:

InstanceName – Manually entered name of the Instance
ServerListId  – Manually entered FK to dbo.ServerList.Id
MSSQLVersion – Version of MSSQL running this Instance
MSSQLVersionLong – The long-form version of previous column
MSSQLServicePack – Current Service Pack of the MSSQL engine
MSSQLEdition – Edition of the MSSQL engine
isProduction – Manual bit flag to designate a production instance

[dbo].[ServiceList]

[dbo.ServiceList.sql] – Setup script for 2012 can be retrieved here.

The dbo.ServiceList table contains information on the MSSQL services running on the server. That includes the MSSQL Database Engine, but also other items such as Reporting Services or Full Text Services.

dbo.ServiceList is dynamically filled so there is no need to add any information to this table.

Notable Columns:

ServiceDisplayName – The Display Name of the Service
ServiceName – The system name for the service
ServiceState – State of the Service (ie “Running”)
ServiceStartMode – Start Mode of Service (ie “Auto”)
ServiceStartName – User the Service is running as

[dbo].[DatabaseList]

[dbo.DatabaseList.sql] – Setup script for 2012 can be retrieved here.

This table is the dynamically generated list of all the databases on each instance. As you’ll see later in the Powershell, anything we can pull back from sys.databases (or anything we can join to it) can be put in this table or a related table.

Notable Columns:

DatabaseName – Name of the database (crazy, right?)
SizeInMB – Size of the database returned in Megabytes

dbo.DatabaseList is dynamically filled so there is no need to add any information to this table.

Database Ready

And that’s it for the tables and database. Keep in mind that this is a basic structure that can be the core of any setup you’d like. Any data you can capture via T-SQL or Powershell can be compiled and put into these tables or better yet, other related tables.
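Once the tables are populated, seeing how they hang together is just a couple of joins; something like this (a sketch based on the columns described above, so adjust the key names to match your actual schema):

SELECT  sl.ServerName,
        il.InstanceName,
        il.MSSQLVersion,
        il.MSSQLEdition,
        dl.DatabaseName,
        dl.SizeInMB
FROM dbo.ServerList sl
JOIN dbo.InstanceList il ON il.ServerListId = sl.Id
JOIN dbo.DatabaseList dl ON dl.InstanceListId = il.Id
ORDER BY sl.ServerName, il.InstanceName, dl.DatabaseName;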

In the next post, I’ll talk about how we’re going to be allowing this information to be put into the database via stored procedures.

 

*Unless you forgot to make backups and accidentally wipe the tables. In this case, you can think about your mistake while you re-enter all the data again.

–CJ Julius

Creating A Simple Database Inventory Manager with Powershell – Introduction

Which database names have the letter “B”?

And then we shall rule the world!

All DBAs should keep track of their Servers/Instances/etc., not only for their own edification, but for management and security reasons as well. If you’re not, then you should start, as it comes in incredibly handy even if it isn’t a requirement of the job.

Most of the time, this information is compiled into a spreadsheet of some kind or possibly in a word processing document somewhere. Keeping this data up-to-date and accurate is a pain, especially when you have to break it out into multiple tabs and/or over multiple documents.

You could get a full-blown inventory manager that collects and compiles all the data and organizes it for you. But there’s a definite cost to that solution and not one that all companies will find useful (Read: “It’s not in the budget this quarter”).

What if you can’t get someone to shell out the money for a product like that? Then you either have to stick with the spreadsheets (yuck) or find another solution with the tools you have.

What’s the Catch?

A simple front-end for your Database Inventory

So, this is my attempt to resolve this issue using two tools that any MSSQL DBA should have: Powershell and SQL Server. I will point to other software products or versions both paid and free below, but the core code should run using things you should already have. That said, here’s my software list:

Required:

Preferred:

Also I will be making a few assumptions:

  1. Your infrastructure security is set up using Active Directory.
  2. Setting up a new instance is already done and you can connect to it.
  3. Your personal login or the login you are using to execute the code is able to query the relevant system tables and server info on each of the target systems.
  4. You know your current Server\Instance setup.
  5. There is no fifth thing.

What You Get

By the time this series is finished you’ll have a simple GUI front-end (shown above) for current data with all of your servers, instances and all the information you could want about them.

We can pull back and organize any data that SQL Server or Windows Server can spit out including, but not limited to:

  • OS Version, Service Pack.
  • SQL Services running, their statuses and logon information.
  • SQL Server Instance names, versions, and editions.
  • SQL Server Database names and sizes.
  • Ability to dynamically re-pull any of this information if needed.

I will walk you through my solution piece by piece over 4 posts (5 including this intro) that will consist of the following parts. This list will be updated with links to the different sections as they are released.

Part I: Building the Repository Database and Tables
Part II: Creating the Repository Stored Procedures
Part III: Coding the Data-Pulling Powershell
Part IV: Putting together the GUI Front-end

Updates:
2016-11-27
Addendum I: Simple Database Inventory Manager 2.1
2017-05-20
Addendum II: Simple Database Inventory Manager 2.3

See you soon in Part I: Building the Repository Database and Tables

–CJ Julius

A Collection of Collections of Free Microsoft Books

Microsoft has lots of free stuff out there. Image modified from a free one provided by norebbo (http://www.norebbo.com/).

There are a lot of free materials out there for learning Microsoft products, and surprisingly (or not?) a lot of them are from Microsoft themselves. I thought I’d take a moment to organize and collect my list of free resources in the hopes that it will not only help me organize and find what I need, but also help those of you who don’t know about this stuff.

The one main source I’m using here is the MSDN MSsmallBiz  blog with posts by Eric Ligman. There are a massive number of titles to look at, but I’ve not seen them compiled into one place. Keep in mind that some of these are older and all the links may not work. I will update this list in the future if I find new/interesting free education materials in this genre.

The Collections

Huge Collection of 60+ MS titles on various topics

This was the first list to go up and start the whole series. Almost all of the offerings come in multiple formats (PDF, EPUB, MOBI).

Noteworthy sections:

Visual Studio 2010 – Office 365 – Windows 8 – SQL Server 2012


 

Large Collection of 20+ MS titles on various topics

The second in the series, and the least interesting of the groups, but it does come with some interesting titles. This group only comes in one format: PDF.

Noteworthy sections:

Own Your Space (a book for teens, no really) – SQL Server 2012 Dev Training Kit


 

Gigantic Collection of 200+ MS titles on various topics

The last group contains quite a few of the previous two sections (but not all, I’ve found). Most are in PDF or DOCX (word) format with a few in portable and non-portable formats thrown in.

Noteworthy sections:

MS Office – Powershell 4.0 (this stuff is really good) – CRM – Quick Start Guide group – even more SQL Server 2012

Bonus

If you’re looking for information on specific Microsoft technologies or if you’re gearing up for an MS cert, check out the Microsoft Virtual Academy.  They’ve got kind of a neat gamification thing going on where you get points for completing certain courses.

 

-CJ Julius

Syncing Between Linux and Windows with BitTorrent

Skip the insecure Cloud with BitTorrent Sync

I’ve always been a DIY kind of guy when it came to technology, and the idea of giving my data to cloud services such as Dropbox or Box.com (and whoever has access to that data besides them) seemed a little iffy. The cloud, as great as it is for some things, isn’t really built for too much security. Keeping data private on an internal system is hard enough, but throwing it out to the internet only multiplies these issues.

That’s where BitTorrent Sync comes in. Built by BitTorrent Labs (and using the BitTorrent Protocol), this solution boasts that it will allow you to sync between different OSes, securely, and without throwing any of it out to the cloud. This increases security incredibly, and isn’t that hard to set up. I put it on my Linux laptop (Stu) and a Windows 8 desktop (Zer0), both of which I’ve used in previous projects. It works, but it has a few caveats as you’ll see below.

Installation on Linux

Linux installation is fairly easy, if a bit obtuse. Instead of an installer of any kind, the package for BitTorrent Sync comes with a License.txt file and a single btsync binary. To start up the software, simply unpack it, navigate to the containing folder in a terminal and run the ./btsync command. That’s it.

$ cd /Location/of/File
$ ./btsync

The Linux binary can be configured through the webGUI (kinda) or the more robust sync.conf file.

However, unlike its Windows and MacOS brethren, there’s no independent GUI to use. You’ll need to open a browser and head to a webpage to administer it. In most cases you can use the address 127.0.0.1:8888

From there you can select the folder you want to sync as well as generate a secret key for said location. The key is to allow other computers on your network to access the folder securely. Barring any conflicting firewall settings on your local machine, this should just be a matter of putting in the secret when you add a folder.

If you need the key from a folder you’ve set up previously, you can get it again from the gear icon next to the listing in BitTorrent Sync. Also, if you head to the Advanced tab you can grab a “Read-Only” secret. If you use this key when setting up another computer, it will read from the folder but never write to it. This is useful if you want the updates to go only one way or you want to give someone the ability to see what’s on your machine without running the risk of them deleting or altering the files.

Installation on Windows

Next, I went to Zer0, my Windows machine, and installed the software. From what I understand, the Windows and MacOS versions are pretty much the same, so other than the intricacies of the Mac platform the installation and use should be very similar.

The Windows application is a little plain, but gets the job done.

After running the installer, you’ll be presented with a page that has several tabs. Go to the “Shared Folders” tab and click on “Add”. Put in the secret from the share that we want to access and click “Okay”. It should have all the information it needs to connect and start syncing. Mine did it automatically and pulled the four or so test files with no further work on my part.

You can also add a local folder and sync it here. By default it’s the btsync folder in your Documents directory. I just left this as it is for my testing purposes.

Tweaking the System

Now that it’s set up, you can do a few more things to shape it to your preferences. As you may have noticed, you can add any number of folders to sync, at no cost, unlike most cloud services. So if your primary concern is just moving files back and forth behind the scenes (as I do), then that’s probably this setup’s greatest strength beyond security.

There are further options as well that fall into the more advanced users’ category. On the Preferences page in both the Linux WebGUI and the Windows application, you can set rate limits, alter whether the software loads at boot and some other odds and ends. In the Advanced section, you can do even more. Here’s a quick rundown of these options:

The conf file has pretty good explanations for every editable line

disk_low_priority: If True, BitTorrent Sync will set itself to Low Priority on the system. Turn this on if you’re noticing serious speed problems when using BitTorrent Sync

lan_encrypt_data: If True, BitTorrent Sync will encrypt data sent over the local network. Turn this on if you want to hide your traffic from others who may be using the same network as you.

lan_use_tcp: If True BitTorrent Sync will use TCP instead of UDP for local transfers. Will use more bandwidth but will be (at least theoretically) more reliable.

rate_limit_local_peers: If True, BitTorrent Sync will apply rate limits (set in General Preferences) to local users. By default rate limits are only applied to external peers (those not on your network).

In Linux, these options as well as a few others are all stored in the configuration of btsync. You’ll need to go to the folder that you have btsync running in to access it. First, you’ll probably want to output a sample configuration and open it in a text editor to see all options you have. There are quite a few.

$ ./btsync --dump-sample-config > sync.conf
$ gedit sync.conf

It’s pretty self-explanatory, but I want to direct your attention to the username/password fields. Remember that webpage we went to earlier to set up the shared folder on Linux? Well, it’s actually hosted from your machine, meaning that anyone who has access to the network can pull up your BitTorrent Sync options and mess with them. So it might behoove you to set this option.
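From memory, the relevant block of sync.conf looks something like the snippet below; the key names may vary between versions, so trust the comments in your own dump-sample-config output over this:

"webui" :
{
  "listen" : "0.0.0.0:8888",
  "login" : "someuser",
  "password" : "somepassword"
}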

Once you’ve organized things the way you want them in your sync.conf file, save it. Now, you can import it back into the BitTorrent Sync application by running btsync with the modified conf file as such:

$ ./btsync --config sync.conf

Worth the Effort?

And that’s pretty much the ins-and-outs of the BitTorrent Sync application. I imagine that I’ll be using this not as my primary software to sync things between machines or as backups, but I will have it move files and folders from one machine to another periodically. Perhaps one could set up a backup drive on a server that just copies one way from all the machines that are linked to it. I imagine that could be a project for a different day.

On the whole this is a nice piece of software that pretty much does what it says it’s going to do, and securely. I know it’s Linux, but the lack of a real GUI and the complication of editing advanced options by way of the .conf file is kind of a downer. I’m totally fine with using the command line (in some cases I prefer it), but that drags down the score a bit on this one because it’s not very user friendly. Still, a fine piece of software that I will definitely be utilizing in the future.

Rating: 4.5/5 – Pretty darn good. However, the Linux version takes a little work to get customized and the Windows/MacOS advanced pages are a little confusing at first.

-CJ Julius