Tag Archives: linux

5 (more) New Features in SQL Server 2017

FETCH NEXT 5

About a week ago I posted about the Top 5 Things I’m most excited to see in SQL Server 2017. As you may have noticed, I just focused on the Engine/Agent and not on anything else. To be fair (and I’m always fair to myself), most of the big exciting changes were there.

However, I did want to give the rest of the SQL Server features their due. To rectify that oversight, here are 5 more new things in SQL Server 2017 that I'm excited about.

1. SSIS On Linux
A Linux Cluster.
Image Source: Visual Hunt

This is a bit of a cheat, because I already went over SQL Server on Linux, but I thought this deserved special notice. Did you know you can now run your SSIS packages from your Linux box with SQL Server? You can.

Just pop in this little one-liner and you’re off to the races*.

$ dtexec /F <package name> /DE <protection password>

If push came to shove you could probably put that on a cron job should you want. You still need a Windows server to create and maintain the packages, but you can run them locally from the box if you’re trying to keep the family together for the kids.
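
For example, a hypothetical crontab entry (assuming dtexec is on the PATH and the package lives under /opt/ssis/packages; adjust the paths and password for your own setup):

# Run MyPackage.dtsx every night at 2:00 AM and append the output to a log
0 2 * * * dtexec /F /opt/ssis/packages/MyPackage.dtsx /DE 'MyPassword' >> /var/log/ssis-mypackage.log 2>&1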

* Certain Terms and Conditions may apply. See your dealer for details.

2. Machine Learning Services
Python!

This is a feature that kinda existed previously, but it was just called “R” Services. The big thing of note is that it now supports Python and the associated libraries. See previous post in this series to catch my sarcasm about Python not being included in the first place.

One thing to note about Machine Learning Services: it's not supported in-database on Linux. You can still do things like native scoring (PREDICT), but that's about the long and the short of it. Microsoft is making noises like they're going to address this in the somewhat-near future.
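
For reference, native scoring looks roughly like this; a sketch with hypothetical table, column and model names, assuming a model has already been trained and serialized into a table:

DECLARE @model VARBINARY(MAX) =
    (SELECT model_object FROM dbo.Models WHERE model_name = 'ChurnModel');

SELECT d.CustomerID, p.Score
FROM PREDICT(MODEL = @model, DATA = dbo.Customers AS d)
WITH (Score FLOAT) AS p;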

3. SSIS Scale-out

This is a pretty neat feature that I hadn't thought about before. What if you had several servers that potentially COULD handle an SSIS workload (in an HA scenario or something), but you didn't want to always target the same instance? You know, spread the love around.

SQL Server 2017 allows you to set up a master on your main instance and then workers on the servers you want to be able to scale-out to. After a bit of setup on your worker machines you can then either target machines with specific packages or let SSIS decide. Check out this walkthrough for more.

You turn this feature on (assuming you’ve set it up properly) in the SSIS Catalog Properties.
Image Source: Microsoft

 

4. New SSRS Web portal

This is less a new feature and more a major revamp of something that already existed. The new Reporting Services default Web Portal is a lot snazzier and has some new things. You can customize the branding of the instance and even develop KPIs that are contextual to the folder you're currently viewing.

Purdy.
Image Source: Microsoft

 

5. MDS Performance Improvements

Master Data Services has had a rough life. Introduced as far back as SQL Server 2008 R2, it has been the red-headed stepchild of the SQL Server offerings. It started out, in my humble opinion, barely usable, unnecessarily complex and just plain feature-poor. I'm not alone in this opinion.

Subsequent releases helped, but even after several versions it was still pretty weak and only useful for very specific cases. MDS only really came into its own in 2016, but with some performance limitations.

Pictured: Master Data Services circa 2008R2.
Image Source: Visual Hunt

Edging ever closer to a more perfect product, SQL Server 2017 features some much needed performance optimization allowing it to stage millions of rows in a reasonable amount of time. It was painfully slow previously with only a few hundred thousand records.

Lastly, they fixed the slow UI movement when doing things like expanding folders on certain pages.

Honorable Mentions

Not any this time, unless you want to talk about SSAS object-level Security or DAX finally getting an IN operator. Those seem pretty useful.

The End?

That’s it for 2017. There are, of course, many many more changes and new features in SQL Server 2017, but I think 10 or so is good enough to give you a taste. There are changes all across the product and I encourage you to look them over yourself.

-CJ Julius

Top 5 New Features in SQL Server 2017 (that I care about)

New Year, New Database Engine

Image Source: http://icons8.com/
This is either a database or a stack of licorice pancakes.

It’s finally here! A few days ago as of this writing, SQL Server 2017 was released for Windows, Ubuntu, RedHat and Docker. There are a lot of new things in SQL Server 2017 from Python support in Machine Learning* to better CLR security. But I thought I’d narrow down the list to changes that I’m most interested in.

1. sudo Your Way To A Better SQL Server

SQL SERVER IS ON LINUX NOW! No surprise this is the first thing on my list as I keep going on and on about it. But it’s really here!

Installation is a cinch, especially if you’re a Linux or Unix person. Curl the GPG keys, add the repository, apt-get (if you’re on a real Distro) the installer and run the setup. It’s really that easy.
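
On Ubuntu, the whole thing boils down to something like this (repo URLs and paths are from memory, so double-check Microsoft's official walkthrough before pasting):

Code:
# Import Microsoft's GPG keys and register the SQL Server repository
wget -qO- https://packages.microsoft.com/keys/microsoft.asc | sudo apt-key add -
sudo add-apt-repository "$(wget -qO- https://packages.microsoft.com/config/ubuntu/16.04/mssql-server-2017.list)"

# Install the engine and run setup (choose the edition, set the sa password, etc.)
sudo apt-get update
sudo apt-get install -y mssql-server
sudo /opt/mssql/bin/mssql-conf setup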

If you want it even easier, then check out Docker. Slap it in and go!

All the main features are there: the engine (of course), Agent, Full-Text Search, DB Mail, AD authentication, the SQL command-line tools, etc. Pretty much everything you need to get going on Linux even if you're in a Windows-dominated environment.

2. It’s Like A Car That Repairs Itself

So this is a big one. With automatic tuning on, SQL Server can detect performance problems, recommend solutions and automatically fix (some) problems. There are two flavors of this, one in SQL Server 2017 and one in Azure; I’ll be talking about the one in SQL 2017 here.

Automatic plan choice correction is the main feature and it checks whether a plan has regressed in performance. If the feature is so enabled, it reverts to the old plan. As you can see below, a new plan was chosen (4) but it didn’t do so well. SQL 2017 reverted to the old plan (3) automatically and got most of the performance back.

Image Source: Microsoft

I’m sure that Microsoft will be expanding this feature in the future and we can expect to see more from this. Azure already has automatic Index tuning in place, so we’ll probably see that in the On-Prem version eventually.
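
If you want to turn it on yourself, it's a database-level setting, and there's a DMV that shows what the engine recommended and whether it acted on it. A minimal sketch:

Code:
-- Enable automatic plan correction for the current database
ALTER DATABASE CURRENT SET AUTOMATIC_TUNING (FORCE_LAST_GOOD_PLAN = ON);

-- Review the recommendations the engine has made
SELECT name, reason, score, state
FROM sys.dm_db_tuning_recommendations;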

3. Indexes That Start, Stop And Then Start Again

This is a feature I didn’t know I needed. Basically, it allows an online index rebuild that has stopped or failed (say, it ran out of disk space) to be resumed. The index build fails, you fix whatever made it fail, and then resume the rebuild, picking up from where it left off. The rebuild will be in the ‘PAUSED’ state until you’re ready to RESUME or ABORT it.

Code:
-- Start a Resumable index rebuild
ALTER INDEX [NCIX_SomeTable_SomeColumn] on [dbo].[SomeTable]
REBUILD WITH (ONLINE=ON, RESUMABLE=ON, MAX_DURATION=60)

-- PAUSE the rebuild:
ALTER INDEX [NCIX_SomeTable_SomeColumn] on [dbo].[SomeTable] PAUSE

/* If you'd like to resume either after a failure or because we paused it. This syntax will also cause the resume to wait 5 minutes and then kill all blockers if there are any. */
ALTER INDEX [NCIX_SomeTable_SomeColumn] on [dbo].[SomeTable]
RESUME WITH (MAX_DURATION= 60 MINUTES,
WAIT_AT_LOW_PRIORITY (MAX_DURATION=5, ABORT_AFTER_WAIT=BLOCKERS))

/*Or if you just want to stop the whole thing because you hate unfragmented indexes */
ALTER INDEX [NCIX_SomeTable_SomeColumn] on [dbo].[SomeTable] ABORT

This is also great if you want to pause a rebuild because it’s interfering with some process. You can PAUSE the rebuild, wait for the transaction(s) to be done and RESUME. Pretty neat.
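
You can also check on any running or paused resumable rebuilds with a new DMV (column list from memory, so verify against the docs):

Code:
-- See resumable rebuilds in flight or paused, and how far along they are
SELECT name, state_desc, percent_complete
FROM sys.index_resumable_operations;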

4. Gettin’ TRIM
A cute Sheepy.
There hasn’t been a picture in a while and I was afraid you might be getting bored.

This one is kind of minor, but it excited me a lot because it was one of the first things I noticed as a DBA that made me say ‘Why don’t they have a function that does that?’ (note the single quotes). TRIM will trim a string down based on the parameters you provide.

If you provide no parameters it will just cut off all the spaces from both sides. It’s equivalent to writing RTRIM(LTRIM(‘ SomeString ‘))

Code:
SELECT TRIM( '.! ' FROM ' SomeString !!!') AS TrimmedString;

Output:
SomeString
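
The no-parameter version, for comparison:

Code:
SELECT TRIM('   SomeString   ') AS TrimmedString;

Output:
SomeString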

5. Selecting Into The Right Groups

Another small but important change. In previous versions of SQL Server, you could not SELECT INTO a specific Filegroup when creating a new table. Now you can, and it uses the familiar ON syntax to do it.

Code:
SELECT [SomeColumn] INTO [dbo].[SomeNewTable] ON [FileGroup] FROM [dbo].[SomeOriginalTable];

Also, you can now SELECT INTO to import data from Polybase. You know, if you’re into that sort of thing.

Honorable Mentions!

Here are some things that caught my eye, but didn’t really need a whole section explaining them. Still good stuff, though.

Query Store Can Now Wait Like Everybody Else

Query store was a great feature introduced in SQL Server 2016. Now they’ve added the ability to capture wait stats as well. This is going to be useful when trying to correlate badly performing queries and plans with SQL Server’s various waits.

See the sys.query_store_wait_stats catalog view in your preferred database for the deets. Obviously, you'll need to turn on Query Store first.
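
A rough starter query (wait time rolled up by category, worst first; join back through plan_id if you want the offending statements):

Code:
SELECT wait_category_desc,
       SUM(total_query_wait_time_ms) AS total_wait_ms
FROM sys.query_store_wait_stats
GROUP BY wait_category_desc
ORDER BY total_wait_ms DESC;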

Con The Cat With Strings

Just as minor as TRIM in some people’s books, but this is a great function for CONCAT_WS’ing (it’s not Concatenating, right? That’s a different function) strings with a common separator, ignoring NULLs.

Code:
SELECT CONCAT_WS('-','2017', '09', NULL, '22') AS SomeDate;

Output:
2017-09-22

Get To Know Your Host

sys.dm_os_host_info – This DMV returns host OS information for both Windows and Linux. Nothing else, just thought that was neat.
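
For example (column names from memory; on Linux the distribution and release columns report the distro details):

Code:
SELECT host_platform, host_distribution, host_release
FROM sys.dm_os_host_info;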

Get Your Model Serviced

Not something I’m jumping for joy about, but it is a good change of pace (I think). No more Service Packs, only Zuul… er… just Cumulative Updates. Check out my article on it if you want to know more about all the Service Model changes.

And Lots Of Other Things

Of course, there are hundreds of other things in SQL Server 2017 and its related features. I didn't even touch on the SSIS, SSAS, SSRS, MDS or ML stuff. Check out the shortened list here, broken down by category. Exciting new toys!

-CJ Julius

* Seriously, how do you release a Data Science platform and not include Python? That's like releasing a motorcycle with only the rear tire. Yes, you can technically use it, but you're limiting yourself to a customer base with a very selective skill-set.

SQL Server on Linux: First 30 minutes

They put a ring on it.

What I gushed over a few posts ago has finally happened! SQL Server has come to Linux (sort of). The database engine is now available as CTP1 and you can get it by adding the repository and running the setup script.

You can follow the walkthrough for your favorite flavor of Linux, so I won't repeat that here. It's really very simple, just a matter of pointing to the correct repository and then apt-get install (Ubuntu). It comes with a setup script that pretty much does all the heavy lifting for you. Keep in mind that this is just a preview, so there aren't a lot of options and it sticks everything in a single set of directories (logs/data/tempdb).

I had a small problem when I did the install, but it turned out I just needed to update a few packages. In the event you’re not a Linux person, here’s the easiest way to fix this:

$ sudo apt-get update
$ sudo apt-get upgrade

There’s a lot of stuff to dig into in this release, and as newer versions come out I’ll get more in-depth, but I just wanted to make a quick post about what I did in my first thirty minutes.

Behold in awe my INSERT abilities.

After the install, I connected via SQLCMD, as there is no SSMS in Linux yet, using the sa and sa password set in the install. I then created a table, dropping a single row into it and then selecting. Not terribly complex stuff.
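
It went something like this (hypothetical table and values, but that's the gist):

$ sqlcmd -S localhost -U sa -P '<sa password>'
1> CREATE TABLE dbo.TestTable (ID INT, SomeText VARCHAR(50));
2> INSERT INTO dbo.TestTable VALUES (1, 'Hello from Linux');
3> SELECT * FROM dbo.TestTable;
4> GO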

I took care to try different cases, adding and neglecting brackets ‘[]’ and semicolons. It responded exactly as I would expect on a Windows system, which is very reassuring. It's nice that my T-SQL skills translate seamlessly to the Linux environment, at least internally to SQL Server.

Connected via my own username.
It doesn't look or act any different than it would if I were connected to a Windows SQL Server instance.

Next, I put my box ‘U64’ on the network and lo-and-behold I was able to remote into it by its Linux hostname from SSMS 2016 on a Windows machine. No additional setup was required. Microsoft appears to be taking this integration of the Linux and Windows environments seriously.

I then created a SQL login for myself and logged in that way. No issues.

Now, as fun as this was, there’s a whole lot missing. The list includes, but is not limited to:

  • Full-text Search
  • Replication
  • Extended Stored Procedures
  • AD authentication
  • SQL Server Agent
  • SSIS
  • SSAS

This is of course just for CTP1, so a lot of these items will probably show up later. I mean, SQL Server without the SQL Server Agent? That doesn't even make sense (I'm looking at you, Express Edition). There is a sort of cascade effect, too, as other items like Maintenance Plans that rely on these missing features are also MIA.

The gang’s all here!

Larger items like Availability Groups will also be absent because there's no Linux analogue for them currently. From what the SQL Server team said in their AMA on reddit, they're toying around with RedHat clustering as a replacement for this in the Linux environment.

The last thing I did before the end of my 30 minutes was to look at the version. As you may or may not know, the Linux version is based on SQL Server vNext, which (as the name implies) is the NEXT version of SQL Server. There was some talk about it being a port of SQL Server 2016, which does not appear to be the case.

SELECT @@VERSION
----------------------------------------------------
Microsoft SQL Server vNext (CTP1) - 14.0.1.246 (X64)
Nov 1 2016 23:24:39
Copyright (c) Microsoft Corporation
on Linux (Ubuntu 16.04.1 LTS)

Note that SQL Server 2016 is version 13.0.

And that’s it! As mentioned before I’ll be doing deeper dives into this as time goes on, at the very least with each CTP. But I have to say I’m happy with the results so far. Everything (that was available) worked as I expected it to work. Nice work MS!

-CJ Julius

(Almost) Everything is Going Open Source Now… and I LOVE it.

Why can’t we be friends?

While I’m putting together my big update on Inventory Manager, I thought I’d take some time to throw confetti into the air. There may be some excited clapping as well. I warned you.

I largely see myself as platform-agnostic. While I think that certain companies do individual products well, I also believe it’s fair to say that none of them do everything well. I use Android phones and Apple tablets, Linux for home (mostly) and Windows at work. Heck, I’ve got a Roku and a Chromecast because they both do things that the other doesn’t.  I’m all over the map, but all over the map is a great place to be, especially in the tech industry now.

Despite all of this, I have to admit I am partial to Free Open-Source Software (FOSS). Give me a choice between Ubuntu and Windows, and all other things being equal, I’ll choose the Debian-based option. I’ll admit my biases.

So, when MS started moving in this direction I was happy. I wanted to see this trend continue, and boy has it. First of all…

1. .NET Core is now running on Redhat.

When Microsoft announced that .Net was going open-source, I was cautiously optimistic. I'm not a big .Net coder, but I could see the benefit and was hopeful that MS would continue down this path. This led to some cool things that I thought I'd never see in a million years, like .Net running on Redhat.

There’s understandably some cynicism about Microsoft’s true intentions, as well as their long term goals, but this is the cross-over that I’ve been wanting to happen for a while. Blending the strengths of RHEL with .NET on top is a great start. If the .NET development platform can be ported, why not parts of the Windows Management Framework? We could even one day see…

2. Powershell on OSX and Linux.

I didn't always like Powershell; in fact, prior to Powershell 3, I just referred to it as PowerHell. Since 4.0, however, it's no secret that I'm a fan; one look at my github will tell you that. I like its logical approach to (most) things and that it works for simple scripts quite easily, while being a powerhouse (no pun intended) behind the scenes.

Sorry, THIS is the coolest thing ever.
This is the coolest thing ever.

This shell coming to OSX and Linux will be a boon for both systems. While I am, and will probably always be, a bash scripting guy, Powershell in Windows just makes everything so gosh-darn easy. If I could whip up a PS1 script with a few imported modules and attach it to a cron job with ease, then I think everybody wins,  mostly me. But, if I decide that I want to use bash instead, that’s okay because…

3. Bash is running on Windows.

This isn't a one-way transition. Microsoft is making a trade, bringing one of the most widely used shells to Windows. This not only makes scripts more portable, but also knowledge.

Have some ultra-fast Linux bash script that works wonders? Super, you now have it on Windows, too. Wrote a script to do some directory work in Powershell? Great, you now know how to do it in Linux.

You can't tell me that isn't the coolest thing ever.
I’m sorry, THIS is the coolest thing ever.

There are very few downsides to this, other than the obvious security issues and that it isn’t truly a stand-alone shell (it’s part of Ubuntu on Windows). In any case, it allows interoperability  between software from different systems. This is great now that…

4. SQL Server is on Linux.

This isn’t technically going open source, as it will run inside a container, but the idea that this will now be possible and supported is like something out of my greatest dreams.

I have a maybe-controversial opinion that SQL Server is the best relational database system out there. For all its faults, I’d rather use SQL Server 2005 SP1 than Oracle 12c. Just the way I feel, and for reasons I won’t go into here. I hope the things I like about SQL Server translate to the Linux environment.

The fact that Ubuntu is supporting this with Microsoft is great. I can’t wait to use my favorite OS with my favorite database engine on the same system.

Last thoughts

There are other items I’ve glossed over, but these are the big ones to me. Soon, we will be able to run SQL Server on Ubuntu Linux with cron jobs executing Powershell for a .Net application that resides on an RHEL box. *excited clapping* (I warned you.)

It’s a great time to be in the tech industry.

-CJ Julius

 

A Few Useful Commands for MongoDB

MongoDB NoSQL database

About four months ago I was handed an already existing MongoDB infrastructure and told to take care of it. It’s been a long road, and I’ve had to train up quite quickly on this system.

While I'm certainly no expert at this point, it is what I've been spending the bulk of my time doing. Now that I've got some actual administration time under my belt, I thought I'd relay a few commands and tricks that I wish I'd known from the outset.

db.setProfilingLevel

db.setProfilingLevel(level, slowms)

This turns on the performance monitoring in MongoDB. The level portion sets the level of the profiling (logging) you want.

0 = Off
1 = Only slow operations
2 = All operations

The slowms is optional, but sets the threshold for what is considered a “slow” operation. It is 100ms by default and generally you can leave it there.

Keep in mind that if you are using the cloud-based MMS solution from MongoDB, this will send information to them. If your business falls under HIPAA or some similar security requirements, you may not want to use this.

Note: You can find out if profiling is enabled by using db.getProfilingStatus()
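
For example, to log only operations slower than 200ms and then confirm the setting:

db.setProfilingLevel(1, 200)
db.getProfilingStatus()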

show profile

show profile

This, when used in conjunction with the previous command, will show the last 5 slow operations. Very useful if you think there might be an errant query or index problem that is slowing down your DB.
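
If you want more than the last five, you can query the profile collection directly (this is where the profiler writes; sorting by timestamp puts the most recent first):

db.system.profile.find().sort({ ts: -1 }).limit(10).pretty()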

db.serverCmdLineOpts()

db.serverCmdLineOpts()

db.serverCmdLineOpts() will give you output similar to this. The “parsed” section is probably what you’re most interested in.

I was handed an already existing infrastructure and had to do a little bit of walking backwards through it to put my own configuration documentation together.

This is a command that tells you what command line options were being used when this particular mongo instance was started. It includes everything entered in the command line as well as anything used in the config file.

Collection Schema

var findschema = db.Collection.findOne();
for (var key in findschema) {print (key) ; }

While MongoDB is a schemaless* database, it does help to know what is being stored where. To build a layout of the database, you can use the above two lines to report all the keys in a Collection. Make sure you swap out the Collection portion to the collection you want to find the schema for.

logRotate

db.runCommand( { logRotate : 1})

If you're running MongoDB on Linux, then you have a few more choices via syslog, but if you're like me and using it on Windows, then this is really your only option for rotating the logs.

That is, it stops writing to one log file and starts writing to another, appending the previous log file with a time stamp. This rotates all logs as far as I can tell, including audit logs if you’re using those. I personally run this command once a month just to keep the files manageable.

More to Come

This is just a quick starter set that I had specifically noted as “things I wished I knew when I started,” and I’ve got quite a few more. In the future I’ll delve into some of my simpler metrics and stat commands that I use in conjunction with MMS.

-CJ Julius

* It’s not really schema-less as MongoDB does use a schema, but it doesn’t enforce that schema. Objects can be added and removed as needed.

Linux Computer Forensics: Deft Linux 8.0b

Deft Linux 8.0b is out and it’s looking great.

A month or so ago I did a walk-through of some simple computer forensics using Deft 7 Linux (Carve and Sift: My Primer to Linux Computer Forensics). There have been several other versions of this distro to come out since then, but now that the beta for 8.0b has been released publicly, it marks a slight shift in the way Deft handles.

While my previous guide is still valid, there are a few additions that really place this version above its predecessors. Now, I'm not going to go through every change (you can do that on their website), but there are some really neat features that I'd like to point out.

New Feel

The first thing that will hit you when you start Deft 8.0b is the new layout. While the base operating system is still Ubuntu (Lubuntu to be precise) the LXDE desktop has been further customized from its 7.x version and now looks and feels like its own OS rather than a 1-off from an Ubuntu derivative. The menu is themed for Deft 8.0 with a little 8-ball and more icons have been added to the bottom panel.

The Desktop is more reserved and better organized.

[Screenshot of Deft 7]

The desktop still has the LXTerminal (a must) and the evidence folder, but gone is the “Install” option. Since this is a beta version it is unclear whether this is gone forever or if it will be back later. 8.0b is certainly installable as the boot menu attests.

Guymanager, a very nice disk managing/imaging tool, has been added as well as the file manager for quick access. You’ll see in my screenshots that there is a “Get Screenshot” icon on the desktop, but that was added by me for this article and is not default.

The menu panel is almost entirely new, with only LxKeyMap being carried over with the standard desktop selector. There is a whole host of new software moved in, some from previous versions of Deft but were housed in the menu (like Autopsy) or on the command line. All-in-all this is a good move, as the most used programs are put front and center and the more specialist and less-used are in the easily navigable menus.

New Software

GuyManager is a welcome addition to Deft 8.

Deft 8.0b brings a lot of new software to the distro by default, and the latest versions of most of it. This version is 64-bit only and able to address up to 256TB of RAM. Previous versions could only “see” 4GB because of the 32-bit limitation.

Again, their post on the update gives a broader view of the changes, but there are a few that I wanted to note in summary:

  • Cyclone is now at 0.2 and appears to be mostly the same as before. I’m assuming the changes are back-end.
  • Sleuthkit 4.0 stable is now included, but the Deft devs say that 4.1 will be on the official 8.0 release. [Website]
  • Guymanager 0.7.1, mentioned before, is a very nice forensics tool/disk mounting utility. [Website]
  • Tor is now available pre-installed with browser. I’ve not much use for this, but it is an increasingly-popular internet-access method. [Website]

Skype Xtractor is also new and is probably my favorite addition to Deft 8. While I’m not a criminal investigator, and I’m generally only using the distro for file-recovery, its future utility could be invaluable. Skype Xtractor is a command-line program that extracts the tables from Skype’s main.db and chatsync files and outputs them to html. So far, you can only get it on Deft 8, but it’s so useful I can’t imagine that it won’t show up elsewhere.

New Everything Else

SciTE is a recently added text editor and is the sole resident of the new Programming menu.

Almost every other piece of software has gotten an update since Deft 7 and some have been given GUI front-ends, which is nice for beginners or those not terribly familiar with Linux command-line. The focus on 64-bit architectures with this version will mean that it probably won’t supplant my use of Deft 7 completely; there are quite a few machines in use out there that are single-core systems.

If you’re familiar with Deft 7, then I’d recommend getting 8 and using it on your 64-bit machines when able, since everything that was in the previous version is in this one (even though it’s beta) and better. Switch back to 7 only if you have to do so. However, if you’re new to computer forensics then I’d recommend sticking to 7 or waiting for the official Deft 8 release which should be very soon.

-CJ Julius

Syncing Between Linux and Windows with BitTorrent

Skip the insecure Cloud with BitTorrent Sync

I’ve always been a DIY kind of guy when it came to technology, and the idea of giving my data to cloud services such as Dropbox or Box.com (and whoever has access to that data besides them) seemed a little iffy. The cloud, as great as it is for some things, isn’t really built for too much security. Keeping data private on an internal system is hard enough, but throwing it out to the internet only multiplies these issues.

That’s where BitTorrent Sync comes in. Built by BitTorrent Labs (and using the BitTorrent Protocol), this solution boasts that it will allow you to sync between different OSes, securely, and without throwing any of it out to the cloud. This increases security incredibly, and isn’t that hard to set up. I put it on my Linux laptop (Stu) and a Windows 8 desktop (Zer0), both of which I’ve used in previous projects. It works, but it has a few caveats as you’ll see below.

Installation on Linux

Linux installation is fairly easy, if a bit obtuse. Instead of an installer of any kind, the package for BitTorrent Sync comes with a License.txt file and a single btsync binary. To start up the software, simply unpack it, navigate to the containing folder in a terminal and run the ./btsync command. That’s it.

$ cd /Location/of/File
$ ./btsync

The Linux binary can be configured through the webGUI (kinda) or the more robust sync.conf file.

However, unlike its Windows and MacOS brethren, there's no standalone GUI to use. You'll need to open a browser and head to a webpage to administer it. In most cases you can use the address 127.0.0.1:8888

From there you can select the folder you want to sync as well as generate a secret key for said location. The key is to allow other computers on your network to access the folder securely. Barring any conflicting firewall settings on your local machine, this should just be a matter of putting in the secret when you add a folder.

If you need the key from a folder you’ve set up previously, you can get it again from the gear icon next to the listing in BitTorrent Sync. Also, if you head to the Advanced tab you can grab a “Read-Only” secret. If you use this key when setting up another computer, it will read from the folder but never write to it. This is useful if you want the updates to go only one way or you want to give someone the ability to see what’s on your machine without running the risk of them deleting or altering the files.

Installation on Windows

Next, I went to Zer0, my Windows machine, and installed the software. From what I understand, the Windows and MacOS versions are pretty much the same, so other than the intricacies of the Mac platform the installation and use should be very similar.

The Windows application is a little plain, but gets the job done.

After running the installer, you’ll be presented with a page that has several tabs. Go to the “Shared Folders” tab and click on “Add”. Put in the secret from the share that we want to access and click “Okay”. It should have all the information it needs to connect and start syncing. Mine did it automatically and pulled the four or so test files with no further work on my part.

You can also add a local folder and sync it here. By default it’s the btsync folder in your Documents directory. I just left this as it is for my testing purposes.

Tweaking the System

Now that it's set up, you can do a few more things to shape it to your preferences. As you may have noticed, you can add any number of folders to sync at no cost, unlike most cloud services. So if your primary concern is just moving files back and forth behind the scenes (as I do), then that's probably this setup's greatest strength beyond security.

There are further options as well that fall into the more advanced users’ category. On the Preferences page in both the Linux WebGUI and the Windows application, you can set rate limits, alter whether the software loads at boot and some other odds and ends. In the Advanced section, you can do even more. Here’s a quick rundown of these options:

The conf file has pretty good explanations for every editable line.

disk_low_priority: If True, BitTorrent Sync will set itself to Low Priority on the system. Turn this on if you’re noticing serious speed problems when using BitTorrent Sync

lan_encrypt_data: If True, BitTorrent Sync will encrypt data sent over the local network. Turn this on if you want to hide your traffic from others who may be using the same network as you.

lan_use_tcp: If True, BitTorrent Sync will use TCP instead of UDP for local transfers. It will use more bandwidth but will be (at least theoretically) more reliable.

rate_limit_local_peers: If True, BitTorrent Sync will apply rate limits (set in General Preferences) to local users. By default rate limits are only applied to external peers (those not on your network).

In Linux, these options as well as a few others are all stored in the configuration of btsync. You’ll need to go to the folder that you have btsync running in to access it. First, you’ll probably want to output a sample configuration and open it in a text editor to see all options you have. There are quite a few.

$ ./btsync --dump-sample-config > sync.conf
$ gedit sync.conf

It's pretty self-explanatory, but I want to direct your attention to the username/password fields. Remember that webpage we went to earlier to set up the shared folder on Linux? Well, it's actually hosted from your machine, meaning that anyone who has access to the network can pull up your BitTorrent Sync options and mess with them. So it might behoove you to set this option.

Once you’ve organized things the way you want them in your sync.conf file, save it. Now, you can import it back into the BitTorrent Sync application by running btsync with the modified conf file as such:

$ ./btsync --config sync.conf

Worth the Effort?

And that’s pretty much the ins-and-outs of the BitTorrent Sync application. I imagine that I’ll be using this not as my primary software to sync things between machines or as backups, but I will have it move files and folders from one machine to another periodically. Perhaps one could set up a backup drive on a server that just copies one way from all the machines that are linked to it. I imagine that could be a project for a different day.

On the whole this is a nice piece of software that pretty much does what it says it’s going to do, and securely. I know it’s Linux, but the lack of a real GUI and the complication of editing advanced options by way of the .conf file is kind of a downer. I’m totally fine with using the command line (in some cases I prefer it), but that drags down the score a bit on this one because it’s not very user friendly. Still, a fine piece of software that I will definitely be utilizing in the future.

Rating: 4.5/5 – Pretty darn good. However, the Linux version takes a little work to get customized and the Windows/MacOS advanced pages are a little confusing at first.

-CJ Julius

How I Got My Android Tablet to Boot Windows 95

I was rummaging through some old software of mine a few weeks ago and taking stock of the old operating systems that I had commercially. I noticed that along with some older versions of Redhat and Ubuntu Server, I owned every version of Windows since 95, including quite a few server versions. I wondered what I could possibly do with them, since I don’t even use my store-bought copy of Windows XP anymore.

Hey, I remember you.

Then I looked at my new Galaxy Note 10.1 tablet and got an idea. I wondered if I could get Windows 95 to boot on it. So, I fired up Virtual Box and an old machine I had and got to work.

Note: I am using Ubuntu 12.04LTS and a Galaxy Note 10.1 to do this project. Also, I had access to another, older machine with which I could install Windows 95 myself. Your mileage may vary.

Build 95

There are a few ways to go about this. One is to use Virtual Box to create a working Windows 95 VDI file and then convert that to an IMG after you've got it running; the other is to just find a computer with Windows 95 and make an image of the drive. Either way you'll have to do three things:

  1. Install DOS 5.x or better before installing Windows.
  2. Install Windows 95 and get it working.
  3. Make your image (.IMG) file.
In Virtual Box, you’ll need to set up an MS-DOS environment first and then probably migrate to 95 later.

Now, I've tried both ways, and they're both complex. In the first example, using Virtual Box to create a Windows 95 compatible area for the OS to work in is a pain. This is because the Windows 95 disk is not bootable (and neither is Windows 98, for that matter). You have to have DOS 5.x or later installed first and THEN go to Windows 95. This is as much work today as it was back when Win95 came out.

Then, once you have Windows 95 running you need to get all the drivers (and you’ll probably have to use an older version of Virtual Box because of compatibility issues), some of them custom-made, install them, and squash bugs as they come up.

When you have everything set up Virtual-Box side, you can convert the VDI to an IMG file to make it usable with the vboxmanage command in terminal:

vboxmanage clonehd Win95.vdi Win95.img --format RAW

This is not the method I recommend, as it is the hardest even with a walk-through, however it may be the easiest for people with limited access to hardware. I had, luckily, a piece of hardware that would run Win95 with minimal effort so I went that route.

First, I installed MS-DOS 5.0.7 (available legally and for free here), writing the image files to actual real-live 720KB disks. Yes, I still have a few of those. Then I set up my CD-ROM*, no small feat, and began the Windows 95 install.

Once this had been done, I pulled the HDD out of the computer and connected it to an IDE slot in another machine. I then used the dd command to make a raw image file of the newly-added drive. This ended up giving me a large file because I had given a Gig of space to the virtual drive so I’d have lots of space to move around. You could probably get away with only 200 or 300 MB if you wanted to do so. In any case, the command to image the drive was:

dd if=/dev/sdX of=/path/to/Win95.img bs=4K

(where /dev/sdX is the device node of the newly-attached drive, not its mount point, and of= is the image file you want to create)

Now I had my Windows 95 image and it was time to get it running on the tablet!

Install 95

There are multiple ways to get Windows to run on your tablet once you have an image you like. I personally went through my version and pulled out all the things I didn’t want so I could create a smaller image. I eventually got the entire thing down to 200MB, but that was with a lot of work. There are also two ways to get the image running on your tablet. There’s the way I did it initially, and then the easy way. I’ll be showing the easy way and then give a brief overview of the more difficult path.

The Easy Way

You’re also going to want to use something like AirDroid, which I’ve reviewed before, to move the files over because chances are you’re going to be doing this a lot. As you make tweaks or move different things back and forth that GUI is going to come in real handy.

After you put in the image location and name, it will need to copy it to the SDLlib’s directory, probably on your internal memory.

Move your image file over to your device and take note of its location. You’ll probably want to write it down or something, make sure you note the CASE of the letters, because that will be very important. Also you’ll need to make sure you have enough space to copy the image over to the working directory of the emulator that we’re going to use here in a minute. So you’ll need at least twice the space of the original IMG file to use it.

Go to the Play Store and find Motioncoding’s Emulator. It looks like an Android with the Windows XP flag colors on it. Download, install and run it.

Once running, go through the menus (using the forward/back buttons, it really couldn’t be more simple) until it asks you to install libSDL and do so. Then select the option under “Import from Library” to Add Custom Images. Name the image whatever you want and put in the path to the image in there. For example, mine is:

/storage/extSdCard/SDL/Win95.img

Select the image from My Images and continue to the end. You should see your OS boot.

The Hard Way

The reason I’m putting the hard way on here is because it gives you a bit more control over your install and, I think at least, runs a bit faster. In any case I’m going to assume that you’re doing it this way because you’re a little more experienced/curious and don’t need me to hold your hand.

Copying over the SDL apk and related software.

Step one is getting a working version of the SDL apk and installing it. You can do a quick Google search for it, but I’m not sure of the legal ramifications (or its copyright) so I’m not putting a direct link here. Keep in mind that you will need to allow “Apps from Unknown Sources” to be installed on your device. This can usually be found in the “Application Settings” area, depending on your version of Android.

Place your Win95 image in the SDL folder with the APK and rename it c.img, and load SDLlib. You may have to do more tweaking at this point as Networking didn’t work out-of-the-box for me. I needed to modify some already existing .bin and .inf files to coax them into doing what I needed to do, and even then it’s a little haphazard. You’ll need to have some method of editing the img file if you can’t get networking going or you’re going to need to re-image the drive every time you want to make a change.

This way you’ll also have access to the BIOS and VGABIOS bin files, if needed, but I didn’t end up touching them.

Android 95

My reasons for doing this were purely academic. I just wanted to see if I could get it to boot and get it usable. After several weeks of poking at it I was, by all of the above methods, able to get 95 and 98 going this way. Windows 98 was just a matter of upgrading 95 and creating a new image file. I can’t think of many reasons to do this other than for the learning experience, though there are lots of pieces of software out there that don’t work so well in modern versions of Windows and maybe you want to take them with you.

Windows 95 successfully running on my Galaxy Note 10.1 with mouse and keyboard support

Also, I was able to get my Logitech keyboard/mouse combo to work through the 30-pin charging port, and while dragging the cursor across the screen and “clicking” by tap was interesting, the keyboard is the way to go. It’s just too cumbersome for daily use otherwise.

So there it is, an Android tablet booting Windows 95/98! You can supposedly do this with Windows 2000 or XP, but I have not tried. If you have let me know, because I’d be interested in how you got native NTFS to work.

*There’s no instruction here because it really depends on your CD-ROM as to how you’d go about this. You’ll have to find one that will work with Win95 and DOS. I had one in the machine already so it was just a matter of setting it up manually through DOS.

-CJ Julius

Setting Up a Raspberry Pi with Ubuntu

I had been putting off posting about this project until I had gotten RaspBMC to work, as that was step two, but it looks like the fix for the problem I need resolved is going to be a little while coming. So, I'm going to come back later and post an update if I get it running correctly. Either way, the Raspbian (the Debian Wheezy Raspberry Pi distro) setup is pretty clear and the same for every model of Raspberry Pi.

Here is the hardware that I’m working with:

  • Raspberry Pi Model B
  • Logitech USB Wireless Mouse Keyboard combo
  • 4GB SDHC Class 10 Memory Card
  • Edimax USB wireless adaptor
  • 4GB USB stick (for extra storage)
  • Gearhead Passive USB hub
  • USB 1.0A power adapter and Micro USB cable
Raspberry Pi Model B with SD card and wireless adapter inserted.

I did this all in Ubuntu 12.04, so my work will be related to that OS; though commands are pretty similar across many distributions. Also, I have an SD card slot in my laptop, which means I did not need an adaptor to access the card directly.

The first step is to get the image on the card. I snapped in the card, it mounted, and I went to the disk utility to find out where it had put it (in the system). It showed up as /dev/mmcblk0. Once I knew that, I was ready to go get the Raspbian OS.

You can get the latest image off of Raspberrypi.org’s downloads page. I’d recommend the straight Raspberry Pi Wheezy image, as the “soft float” one is slow, and the others are more for advanced users that want to do very specific things.

Raspberry Pi booting for the first time

In any case, once I had it downloaded I checked the SHA1 sum, because we'd hate to have a corrupted image from the word go. If you're unfamiliar with SHA1, it's simply a method of verifying file integrity. Quite basically, an algorithm generates a unique number for a file, and then that number can be checked against a copy of the file to make sure that it's in good condition. In terminal, in the folder where you downloaded the file, you put the command:

sha1sum 2013-02-09-wheezy-raspbian.zip

And you’ll get an output that looks something like the string listed on the downloads page. In my case, I was looking for the following: b4375dc9d140e6e48e0406f96dead3601fac6c81

Then, I just opened the archive, drag/dropped the file into a folder I had created previously, and returned to terminal. We're going to be using the dd command to copy the extracted image (input file) to the card (output file). We'll set the byte size to 4M and need to be superuser to do this. My command was:

sudo dd bs=4M if=2013-02-09-wheezy-raspbian.img of=/dev/mmcblk0

Raspberry Pi Wheezy default Desktop

Once it was done, I unmounted my card and slapped it in my Raspberry Pi for boot. On first boot you'll get a lot of options. I'm not going to go through them one by one, as it's pretty clear what each one is. The two I want to point you to, however, are the expand rootfs and the memory split options.

Expand rootfs is necessary if you have, like me, a larger than 2GB SD card. This opens up the rest of your card to be used by the system, so you have more storage space for your OS.

The memory split is important because the Raspberry Pi has a unified memory structure, meaning that it has one unified “bank” of memory that it divides towards certain tasks. If you’re going to be doing processor-heavy tasks like number crunching or multiple cron jobs, then you might want to push this towards the system memory side. However, if you intend to be using a lot of the graphical features, then you might want to lean towards the GPU.

My Raspberry Pi as I use it now.

The system is installed and ready to go. If you hit a command-line on boot, use startx to start the X Windows system (the GUI), and that’s it. I spent a good few hours customizing it, changing the wallpaper and such, but also removing and adding some software from the system to make it more useful to me, but that’s the basic setup.

I’ll come back at a later date if I get RaspBMC working, but as of right now it forgets that I have a mouse and keyboard attached to it, and there isn’t a simple solution that works so far. Everything works in Raspbian, and I’ve got quite a few things that I want to do in that, including Python that I mentioned in a previous post.

-CJ Julius

Using Python 3 on Ubuntu 12.04

Python on Linux

Recently, I've turned my attention to Python, the programming language. I had some work with it in the past, but never really got that far. As a hobby, it was time-consuming and other things got in the way. Now that I've freed up a small chunk of time every week, I've decided to devote it to learning the new Python 3, since 2.x is going away eventually.

I quickly found out that Python 3 is not directly supported on my platform of choice: Ubuntu 12.04 LTS. So, I needed to get this running from scratch, which involves downloading, compiling and making it easy to get to for working in.

Compiling and Installing

If you haven’t done so already, you’ll need to get a C compiler for Ubuntu. In general, it’s good to keep this resident on your machine anyway, since you don’t always know when you’ll need it and it doesn’t take up a whole lot of space.

sudo apt-get install build-essential

Then, we’ll need to get our Python installer from the web. I’m currently pointing towards the 3.3.1 version, but there will always be newer versions on the horizon, so check the download page.

wget http://www.python.org/ftp/python/3.3.1/Python-3.3.1.tar.bz2

This will download the bzip tarball of the source code from the python website. Then, we need to un-ball it and change to the newly created directory.

tar jxf ./Python-3.3.1.tar.bz2
cd ./Python-3.3.1

Lastly, we’ll configure the source code, tell it where to install and then point our compiler (the first thing we did) at Python and tell it to put it all together.

./configure --prefix=/opt/python3.3
make && sudo make install

And now the basic Python core is ready to go. You can test it by putting the following in the command line.

/opt/python3.3/bin/python3

You should get the following output, or something quite similar:

Python 3.3.1 (default, May 12 2013, 22:10:01)
[GCC 4.6.3] on linux
Type "help", "copyright", "credits" or "license" for more information.
>>>

Getting Fancy

A command line-type of person may want to create a symbolic link that will let them have a sort of “python command”. Keep in mind that in the following code, you can substitute the “/bin/python” for anything you want the command to be (ie. “/bin/py” or “/bin/pthn” which will make the command py or pthn respectively).

mkdir ~/bin
ln -s /opt/python3.3/bin/python ~/bin/python

Alternatively, you may want to install a virtual environment for testing or whatnot. To do this and activate it, use this in the command line.

/opt/python3.3/bin/pyvenv ~/py33
source ~/py33/bin/activate

Integrated Development Environments

If you're anything like me, then coding directly from gedit or the like is cumbersome and not really all that fun. I like options, a GUI, and all the bells and whistles, so I went looking for an IDE.

KomodoEdit’s install is as simple as downloading it and running the install.sh

Netbeans was the first choice, as I'd used it before for PHP work, but here I wanted something more dedicated to Python. If you do decide to go this route, make sure that you get the one from the Netbeans website and install it yourself. The version in the Ubuntu Software Center is terribly out of date and, judging from the reviews, fatally flawed.

My second choice was KomodoEdit, the stripped down version of the Komodo IDE which I’ve heard some good things about (but never used). You can get it for both x86 and x64 as an AS package from their website.

If you have another IDE that you like better, let me know and I’ll take a look at it. I’m always on the hunt for a better/easier way to code.

-CJ Julius