
Simple Stored Procedure to Compress Old Tables

Pretty sure the SQL Server compression code looks like this.

I’m always looking for a way to save space in SQL Server. From archiving old data to just flat out deleting unused objects, I take great joy in removing superfluous stuff. The less junk in the system, the easier it is to focus on the things that matter.

…and fit it in a 10 kg bag

The biggest useless space eaters are tables that are (supposedly) no longer used. I could script them out with their data to a file, but what if they're 100+ GB? I could also back them up to another database and then drop them; that would certainly free up space in the original DB. But what if they're needed for some process I was unaware of, and we can't wait for them to be restored or moved back?

That was my conundrum. So I decided to implement a process that looks at a single DBA-controlled schema and compresses every table created prior to a certain date. I could TRANSFER a superfluous table to that schema and leave it; at some point in the future, a job would come along and compress it.

If the data was needed within X days, the table could easily be transferred back to its original schema: no harm, no foul. I would also save space, since tables would be automatically PAGE compressed, and they could be decompressed if needed. Compression and decompression are really fast in SQL Server.
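The two moves behind this idea are one-liners. Here's a hedged sketch, assuming a DBA-controlled Cleanup schema and a hypothetical dbo.OldSales table:

```sql
-- Park a (supposedly) unused table in the cleanup schema; table name is hypothetical.
ALTER SCHEMA Cleanup TRANSFER dbo.OldSales;

-- Later, a job PAGE-compresses whatever has sat in Cleanup long enough.
ALTER TABLE Cleanup.OldSales REBUILD WITH (DATA_COMPRESSION = PAGE);

-- If the table turns out to be needed after all, move it back (and decompress
-- with DATA_COMPRESSION = NONE if you want to).
ALTER SCHEMA dbo TRANSFER Cleanup.OldSales;
```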

It’s Compression Time

So, I created this super-simple stored procedure, prCompressCleanupTables (click for the GitHub link). It takes the following parameters:

  • @CompressBeforeDate – A DATETIME parameter specifying how old a table must be (based on its create date) before it is compressed.
  • @Schema – A sysname parameter that takes the name of the schema you want to compress. Keep in mind that the same schema name is used in every database, so make sure it's unique (I use the 'Cleanup' schema personally, hence the name).
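A call matching my own setup (tables older than 60 days, in a Cleanup schema) might look like the sketch below; the procedure name and parameters are from above, but I'm assuming it lives in dbo:

```sql
DECLARE @cutoff DATETIME = DATEADD(DAY, -60, GETDATE());

EXEC dbo.prCompressCleanupTables
    @CompressBeforeDate = @cutoff,
    @Schema = N'Cleanup';
```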

It skips the following databases by default: master, tempdb, model, msdb, distribution, ReportServer, SSISDB. It will also skip any database in a state other than ONLINE.

Also remember that compression is locked to certain editions of SQL Server, and requires 2008 or later (you really need to upgrade if being on 2008 is your limiting factor).

I’m Also A Client

I have this implemented as a job on several servers; it checks weekly for new tables to compress in the appropriate databases, looking for any tables created prior to GETDATE() – 60. I have to say, it runs very quickly, even on large tables.

Let me know if this is helpful to you!

-CJ Julius

A Few Useful Commands for MongoDB

MongoDB NoSQL database

About four months ago I was handed an already existing MongoDB infrastructure and told to take care of it. It’s been a long road, and I’ve had to train up quite quickly on this system.

While I’m certainly no expert at this point, it is what I’ve been spending the bulk of my time doing. Now that I’ve got some actual administration time under my belt, I thought I’d relay a few commands and tricks that I wish I’d known from the outset.


db.setProfilingLevel(level, slowms)

This turns on performance monitoring in MongoDB. The level argument sets the level of profiling (logging) you want:

0 = Off
1 = Only slow operations
2 = All operations

The slowms argument is optional; it sets the threshold for what is considered a “slow” operation. The default is 100 ms, and generally you can leave it there.

Keep in mind that if you are using the cloud-based MMS solution from MongoDB, this will send information to them. If your business falls under HIPAA or some similar security requirements, you may not want to use this.

Note: You can find out if profiling is enabled by using db.getProfilingStatus()
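Putting the two together in the mongo shell might look like this (run against the database you want to profile; the 200 ms threshold is just an example):

```javascript
// Log only slow operations, with "slow" defined as 200 ms or more.
db.setProfilingLevel(1, 200)

// Confirm the current settings.
db.getProfilingStatus()

// Turn profiling back off when you're done.
db.setProfilingLevel(0)
```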

show profile

This, when used in conjunction with the previous command, will show the last five slow operations. Very useful if you think an errant query or index problem might be slowing down your DB.
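Under the hood, show profile reads from the system.profile collection, so if you want more than the last few entries you can query it directly. A sketch:

```javascript
// Ten most recent profiled operations, newest first.
db.system.profile.find().sort({ ts: -1 }).limit(10).pretty()

// Or just the really slow ones.
db.system.profile.find({ millis: { $gt: 500 } }).sort({ ts: -1 })
```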



db.serverCmdLineOpts() will give you output similar to this. The “parsed” section is probably what you’re most interested in.

I was handed an already existing infrastructure and had to do a little bit of walking backwards through it to put my own configuration documentation together.

This is a command that tells you what command-line options were used when this particular mongod instance was started. It includes everything entered on the command line as well as anything set in the config file.
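If you only care about the “parsed” section, you can pull it out directly in the shell:

```javascript
// Print just the effective options (command line plus config file).
printjson(db.serverCmdLineOpts().parsed)
```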

Collection Schema

var findschema = db.Collection.findOne();
for (var key in findschema) { print(key); }

While MongoDB is a schemaless* database, it does help to know what is being stored where. To build a layout of the database, you can use the two lines above to list the keys in a collection. Make sure you swap out the Collection portion for the collection you want the schema of. One caveat: findOne() returns a single document, so fields that don’t happen to appear in that document won’t be listed.
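The key-walk itself is plain JavaScript, so you can see what it does outside the shell, too. In this sketch a hypothetical literal object stands in for db.Collection.findOne():

```javascript
// Hypothetical document standing in for db.Collection.findOne().
var findschema = { _id: 1, name: "Ada", joined: "2015-01-01", tags: ["new"] };

// Collect the top-level keys, just as the shell loop does with print().
var keys = [];
for (var key in findschema) { keys.push(key); }

console.log(keys.join(", ")); // _id, name, joined, tags
```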


db.runCommand({ logRotate: 1 })

If you’re running MongoDB on Linux, you have a few more choices via syslog, but if, like me, you’re using it on Windows, this is really your only option for rotating the logs.

That is, it stops writing to the current log file and starts writing to a new one, renaming the previous log file with a timestamp suffix. This rotates all logs as far as I can tell, including audit logs if you’re using those. I personally run this command once a month, just to keep the files manageable.
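One wrinkle: logRotate is an administrative command, so if your shell session is sitting in another database, run it against admin:

```javascript
// logRotate is run against the admin database.
db.getSiblingDB("admin").runCommand({ logRotate: 1 })

// Equivalent shorthand.
db.adminCommand({ logRotate: 1 })
```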

More to Come

This is just a quick starter set that I had specifically noted as “things I wished I knew when I started,” and I’ve got quite a few more. In the future I’ll delve into some of my simpler metrics and stat commands that I use in conjunction with MMS.

-CJ Julius

* It’s not really schema-less as MongoDB does use a schema, but it doesn’t enforce that schema. Objects can be added and removed as needed.