Ride Snowboards & Drive Thru Records Promo CD

Ride CD - Cover

When I was younger (15, in 2001) I went to a Ride Snowboards demo in western NY. While there, I got hold of a free demo CD of bands on the Drive Thru Records label. I loved that CD and listened to it over and over again for at least a year. After I moved to CA from NY, I lost the CD and had been looking for it ever since. Well, this past week, I found it on eBay.

Apparently, it was a very limited release and was never sold anywhere. It was only given out for free at demos. Super glad I found it and for only $5. Hopefully someone else will be looking for it and read this.

Below is the track listing:

  1. The Benjamins – Sophia on the Stereo
  2. The Benjamins – Wonderful
  3. The Benjamins – Couch
  4. New Found Glory – Sincerely Me
  5. Home Grown – Give It Up
  6. Starting Line – Cheek to Cheek
  7. Starting Line – Leaving
  8. Midtown – Let Go
  9. Midtown – No Place Feels Like Home
  10. RX Bandits – Anyone But You
  11. Allister – Stuck
  12. The Movie Life – 10 Seconds Too Late
  13. The Movie Life – This Time Next Year
  14. H2O – Out of Debt

Ride CD - Back

Configuring pfSense for Wii U Online Play

Recently, I purchased Splatoon for my Wii U. The game is pretty fun for local battles and the single-player campaign, but I really bought it for online play. However, I could not get a single online match to work. It would sit and search for a game to join and then pop up with error code 118-0516. Then I tried Mario Kart 8 and Super Smash Bros. and realized I had the same issue with them as well (I never play them online, so I hadn't noticed). A few minutes on Google brought up numerous results from people having the exact same issue. So after reading through Nintendo's connection troubleshooting guide, I started playing around with my router to see if I could get it to work. Here's what I found to make it work:

  • Give your Wii U a static LAN IP address (Nintendo’s Guide)
  • Switch pfSense to manual outbound NAT mode. (Firewall -> NAT -> Outbound, select “Manual Outbound NAT rule generation”)
  • Add a new outbound NAT mapping with the following settings:


  • Add a new NAT port forwarding rule (Firewall -> NAT -> Port Forwarding) with the following settings:

Wii U NAT Port Forwarding

  • Enable UPnP. (Services -> UPnP & NAT-PMP)

I’m not sure if UPnP is necessary, but I enabled it anyway because Nintendo recommended it. Once these settings are all configured, restart your game and the matchmaking should start working.
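For reference, the setting that usually matters most for Nintendo matchmaking is enabling Static Port on the outbound rule for the console. A sketch of rules along these lines, as commonly documented for Nintendo consoles (the 192.168.1.50 address is hypothetical; use whatever static IP you assigned your Wii U):

```text
Outbound NAT mapping (Firewall -> NAT -> Outbound):
  Interface:    WAN
  Protocol:     any
  Source:       192.168.1.50/32  (the Wii U's static LAN IP)
  Destination:  any
  Translation:  Interface address
  Static Port:  checked

Port forward (Firewall -> NAT -> Port Forwarding):
  Interface:    WAN
  Protocol:     UDP
  Dest. ports:  1-65535  (the range Nintendo's guide suggests)
  Redirect to:  192.168.1.50, same port range
```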

Hopefully this helps someone else in the future!

Pepperoni Pizza Roll-ups


This is a simple recipe that my family loves. It's very quick and easy to make. It makes 16 roll-ups, which was enough to feed my family of five. Prep time is about 10 minutes and cook time is about 10 minutes.


Ingredients:

  • Crescent Rolls (2 cans of 8)
  • Sliced Pepperoni
  • String Cheese (mozzarella, 8 sticks)
  • Parmesan
  • Garlic Powder
  • Marinara Sauce (1 jar)


Directions:

  1. Preheat the oven to whatever temperature is listed on the cans of crescent rolls. (375°F, in my case)
  2. Cut the string cheeses in half.
  3. Place 3-5 pepperoni slices on the wide end of a flattened crescent roll. Then place one of the string-cheese halves on top. Sprinkle with parmesan and roll it up.
  4. Place evenly apart on a foil-lined cookie sheet. Sprinkle with garlic powder and parmesan.
  5. Bake until roll-ups are golden brown on top. (usually 9-11 minutes)
  6. Serve with warm marinara sauce for dipping.

Internal Network Device Resolving to

Today I had a client call and tell me that, all at the same time, all of his Windows XP workstations wouldn't load their dental software. Now this is weird, because his Windows 7 machines still worked just fine. This is a new client that we took over, so it's not set up to our standards. It's a small office with one server and twelve workstations. Nothing too fancy. The server name is SERVER and the domain is DENTAL.LOCAL.

My initial thought was networking equipment failure. All of his XP workstations are right next to each other so I figured they might be on their own switch. However, they all still had internet. My next thought was maybe a software update broke the installation on XP only. But they hadn’t done an update in a few months, so that couldn’t be it either. So I started digging…

All of the mapped drives to the server were disconnected. When I tried to re-connect them, they gave me authentication errors, even with correct credentials. After a reboot, still no network drives. So then I thought, maybe they can’t talk to the domain controller anymore. So I tried a ping. This is where the fun starts.

Pinging the server by name got me a response from WHAT?!

Nslookup was giving me a non-authoritative answer with the same address from the very server I was trying to look up by name. This didn't make any sense.

Checked local IP settings. Correct.

Flushed DNS cache. No dice.

Checked pinging and nslookup from a Win 7 workstation. Perfect responses.

A quick Google search told me that ICANN uses that address to let sysadmins know when they have a name resolution conflict. The only problem with that is, I didn't have a conflict. DNS on the server was set up perfectly. SOA, forwarders, and DNS entries were all correct. No errors in the event log.

ICANN couldn’t have reserved .local, could they?

That's when I had an epiphany. Even though .LOCAL wasn't a gTLD, .DENTAL was. The XP machines were chopping off the .LOCAL from the end of the FQDN, so they were trying to resolve SERVER.DENTAL.

I manually added the SERVER entry to the hosts file and they immediately started working again.
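For anyone hitting the same thing, the hosts-file fix is a one-line entry (the IP here is hypothetical; use your server's actual LAN address). On XP the file lives at C:\WINDOWS\system32\drivers\etc\hosts:

```text
# Work around SERVER.DENTAL resolving to ICANN's collision address
192.168.1.2    SERVER    SERVER.DENTAL.LOCAL
```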

Guess it's time to upgrade, Doc.

FireBird Performance Tweaking

Last week, I posted a story about my interaction with a software developer who uses FireBird (FB) as a back-end for their software. The short version is that they left FB's settings at default and were using a really, REALLY old version of FB. This caused the application to have massive (18 minute) load times and hang the entire server with just one client connected. Below are the changes I made to increase the overall performance of the application and server. These changes worked well for the software DOX by KSB Dental, but could potentially be used elsewhere. I will attempt to explain each option the best I can, but please keep in mind, I am not a FB database expert. I also added FB's bin folder (C:\Program Files\FireBird\bin, in my case) to the PATH variable to make running the commands easier.

32-bit VS 64-bit

Initially, they were running FB 2.1.3 (x86) on Server 2003 (x86). We upgraded the server's OS to Server 2008 R2 (x64) but left FB at 2.1.3 (x86). Initially this helped, but after the first week or so the speed difference was gone. My guess is that the backup and restore of the database during the migration re-indexed the data.

Another issue with 32-bit is that there are limits within both the OS and FB. Specifically, the fbserver process cannot use more than 2GB of memory, and the maximum number of page buffers a database can use is roughly 130k.

We then upgraded FB to 2.5.2 (x64). Just that switch alone doubled the speed of the application and we could have all 35+ clients connected at the same time.

The switch is pretty straightforward. Just use gbak to back up the database, uninstall the old version, install the new version, and then restore using gbak again.
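If it helps, the backup and restore steps look roughly like this (paths and credentials are examples, not the ones from our server):

```shell
# Back up the database with the old version's gbak
gbak -b -user SYSDBA -password masterkey D:\dbname.gdb D:\dbname.fbk

# ...uninstall the old FB version, install the new one...

# Restore the backup with the new version's gbak
gbak -c -user SYSDBA -password masterkey D:\dbname.fbk D:\dbname.gdb
```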

One final thing to note is that until version 2.1.5, FB (x86) had a known bug when running on 64-bit systems. Essentially, it would keep its own cache and use the file system cache, which means everything was being cached twice. It would also continue consuming memory until the server started swapping, all without a single process in Task Manager showing more than 100MB of memory usage.

Database Page Size

The default page size is 4KB. You can check to see what your database is using by running the gstat command.

gstat -h <path to DB>

We found that increasing our page size to 16KB boosted performance a lot. (Almost double, again) In order to change it, the database needs to have a backup/restore done on it. When restoring, specify the -p switch and set it to 16384.
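In other words, the restore command just gains a -p flag (path and credentials here are hypothetical):

```shell
# Restore from backup with a 16KB page size
gbak -c -p 16384 -user SYSDBA -password masterkey D:\dbname.fbk D:\dbname.gdb
```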

Classic Server VS SuperServer

There are two main differences between these options. The first difference is that in SuperServer mode, every client that connects to the FB server shares the same cache. For our application, this was a must as almost all of the clients were referencing the same data continuously. The second difference is that in Classic Server mode, every client that connects gets its own fbserver process. Our application sometimes opened 3-4 connections per client so we had roughly 180 fbserver processes running at a time.

Choosing which mode to run in is done during the FB installation.

One last thing to note is that in SuperServer mode, by default, the server will only utilize one processor. This can be changed with the CpuAffinityMask variable in the firebird.conf file in the installation directory. We use four cores, so we set ours to 15. The following is a small guide on how to set this:

Each core is assigned a bitmask value: core one is 1, core two is 2, core three is 4, and core four is 8. This keeps doubling for each core you have. (Hint: binary) If we wanted to use just the last two cores, we would set it to 12. The first two? 3. Cores two and three? 6.
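The arithmetic is easy to sanity-check in any POSIX shell; each core contributes a power of two and the mask is their sum:

```shell
# CPU affinity mask values: core N contributes 2^(N-1)
echo $(( 1 + 2 + 4 + 8 ))   # all four cores -> 15
echo $(( 4 + 8 ))           # cores three and four -> 12
echo $(( 1 + 2 ))           # cores one and two -> 3
```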

Page Buffers

We decided to start with 600k page buffers. This setting basically tells FB how much cache to use: the cache size is the page size multiplied by the number of page buffers. For example, with our 16KB pages, our maximum cache size would be 9.2GB. We have since increased it to 800k. To set this, just use the gfix command and then restart the FB server:

gfix -buffers <# of page buffers> -user <DB username> -password <DB password> <Path to DB>
ex: gfix -buffers 600000 -user dbadmin -password secretpassword D:\dbname.gdb

Cache Types and Limits

We opted to use file system caching instead of FB's built-in caching. (Which we may end up changing later; we're still undecided.) To do this, we need to edit two variables within the firebird.conf file. The first is FileSystemCacheSize, which is an integer defining what percentage of system memory FB can use. We set this to 90%, as FB is the only thing running on our server.

Next is FileSystemCacheThreshold. To force FB to always use the file system cache, set this value well above your page buffers setting.
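Putting both settings together, the relevant firebird.conf lines look something like this (the threshold value is just an arbitrarily large number, well above our 600k page buffers):

```text
# Let the file system cache use up to 90% of system memory
FileSystemCacheSize = 90

# Keep this above the page buffers setting so the file system cache is always used
FileSystemCacheThreshold = 1000000
```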

Both of these changes require a full server reboot, not just a restart of FB.

Write Caching

If you’ve ever done any RAID controller configuring, you may be familiar with Write Through and Write Back (Synchronous and Asynchronous) cache modes. By default, FB uses synchronous caching which means that the data gets written to the DB immediately. This is the best option for data validation in case of catastrophic hardware failure.

If you’re looking to increase speeds and have redundancy at the hardware level (RAID, iSCSI, clustering, battery backup, etc) you can use asynchronous caching which writes the changes to cache to be written to the DB when enough of these changes have been cached. We found this increased our speed quite a bit. To do this, simply use the gfix command and restart the FB server:

gfix -w async -user <DB username> -password <DB password> <Path to DB>
ex: gfix -w async -user dbadmin -password secretpassword D:\dbname.gdb


Sweeping

This is a tricky one. To understand sweeping and the reasons behind it, you have to understand how FB's transactions work. I'll try to be brief.

Every time the application interacts with FB (reads or writes), it creates a transaction. These transactions are categorized in two ways (that matter to us): interesting and committed. Once a transaction is started, it becomes interesting to FB, and FB marks it to be committed to the DB. Until the application sends the commit command, the DB keeps a copy of both the old data and the updated data, in case it needs to roll back. This keeps a lot of extra data in the DB unless transactions are being committed correctly.

FB has a built-in function called sweeping to collect the garbage: "left open" changes that never got committed. They call this garbage collection. By default, FB sweeps every 20k transactions. (Measured as the difference between the oldest snapshot and the oldest interesting transaction) Sweeping uses a good amount of CPU and memory and can bog the system down. We opted to disable automatic sweeping and run manual sweeps during specific down times. (After-hours/lunch) To do this, you'll need to set the sweep interval to 0 with gfix and then restart the FB server.

gfix -h <interval> -user <DB username> -password <DB password> <Path to DB>
ex: gfix -h 0 -user dbadmin -password secretpassword D:\dbname.gdb

Then you'll need to set up a scheduled task to run the command that initiates a sweep.

gfix -sweep -user <DB username> -password <DB password> <Path to DB>
ex: gfix -sweep -user dbadmin -password secretpassword D:\dbname.gdb
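As a sketch, the scheduled task can be created from the command line with schtasks (the task name, time, path, and credentials here are hypothetical):

```shell
schtasks /create /tn "FB Sweep" /sc daily /st 12:00 /tr "\"C:\Program Files\FireBird\bin\gfix.exe\" -sweep -user dbadmin -password secretpassword D:\dbname.gdb"
```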


After we made these tweaks to the FB database, we noticed a dramatic difference in speed. We went from 18 minutes to open the application, and very slow speeds within it, to 40 seconds to open and almost instant navigation within the application. Keep in mind that not ALL of these tweaks may be best for your environment; try them and decide which options work for you. If you have any questions, feel free to comment below or shoot me an email and I'll do my best to point you in the right direction.

Incompetent FireBird Developer Woes

A client of ours (45 workstations) has practice management software that uses FireBird SQL Server as its back-end database. It stores all of their patient, billing, claims, scheduling, forms, and procedure data. Typical usage of the application by the front desk staff is checking in/out, scheduling, and collecting money from patients. They all pretty much reference the same data all day long. Typical usage in the back procedure area is pulling up a patient’s x-rays, history, and entering new procedure data. A couple of years ago, they started to notice their software was slowing down more and more.

Some History

They initially had two servers. One was the domain controller that also housed patient x-rays, practice documents, and their orthodontic software. The other ran their practice management software database (FireBird). We provided the domain controller and the other server came from their software provider.

Two years ago, we upgraded their old domain controller to a setup of two Ubuntu servers that use DRBD to replicate a partition holding QEMU/KVM virtual machine images from the primary to the secondary. This allows us to quickly restore services in the event of a catastrophic hardware failure on the primary server by simply setting the secondary as the primary and turning on the VMs. (I’ll do a how-to on this setup soon) When we initially engineered this project, we didn’t intend to virtualize their database server, but left plenty of overhead.

The Trouble Starts

After about six months of running great on the new domain controller, the office started to notice slowness in their practice management software. The drives in the old database server were failing and the database was getting larger. We offered to virtualize the old database server (Server 2003) and put it on our servers, as we knew we had the overhead to support it. I allocated four cores and 4GB of memory to the server, and it ran great for a few more months.

They started seeing more slowness as their database grew to over 30GB. After monitoring the resources on the newly virtualized server, I noticed all of the memory was being consumed, but no process was reporting more than 100MB of usage. I called their software vendor and they explained that we were using an unsupported, non-standard platform and that had to be the cause. So I took some time and reviewed my QEMU/KVM settings and made sure I wasn't missing anything performance-wise. Another call to the software company, and they explained that sometimes the firebird processes don't correctly report the amount of memory they're using.

Something Isn’t right here…

The slowness continued to get worse, and rebooting the virtual machine every lunchtime and night became my only option. The software company continued to blame our host machines and wouldn't help diagnose the issue, even though the client had support with them. In an attempt to speed things up, I created a new Server 2008 R2 virtual machine and migrated the database to it, allocating six cores and 16GB of memory to the new VM. The software company did the database migration after I set up the server for them.

Initially, the performance was much better. After a few months though, it started slowing and I noticed the server was out of memory, again. I also noticed that there were roughly 160 firebird server processes running during the day. The software company explained that firebird creates a process for every connection to it that houses its own cache for each connection as well. I thought that seemed a bit inefficient, but figured since they’re the developers, they know best.

That’s where I went wrong

I started doing some research into FireBird: how it works and why it acts the way it does. I got more familiar with the command line tools used to monitor it and checked the version. As of two weeks ago, they were running 2.1.3 x86. I immediately started questioning the software company about why they were using such an old version. They said that's the version they install on ALL of their servers. So I dug deeper and found that that version of FireBird has a known bug on 64-bit OSes; specifically, in how the system manages FireBird's cache. I brought this to the attention of the software company and they said they couldn't update FireBird or it would break the software. So back to rebooting twice a day while I researched more options.

I started to read techniques for optimizing FireBird and realized that the software company had left all of FireBird's options at their default values. I approached them again and they said they had no idea what I was talking about, and that if I changed settings, things would break. They continued to blame our hardware and started proposing a new server to the client. They also claimed that my client was the only one having these issues.

Not on my watch!

I decided it was time to start testing some of these optimization changes after taking snapshots of the VM. I started with the page buffers and the write-cache mode. Magically, the software still worked and was slightly faster. It still used all of the memory, though. I contacted a buddy of mine who develops an application that uses FireBird as its back-end, and he gave me some pointers on page buffers and other settings.

During this time, the client got fed up with the software company and demanded a list of their clients with similar sizes so they could call and ask them about their performance. After some calling around, they figured out that they were not the only client with this issue and an even larger office had “figured out how to fix it”. They gave me the contact information to that office and I had a long, interesting talk with their IT administrator. Turns out, they fixed the problem by purchasing a MASSIVE enterprise class server. $15k later, the software would run fast and stable. One important note is that they had 120GB of memory, and 70GB of it was being used by FireBird. Their entire database was residing in the memory, which is why it “solved their problem”.

Avoiding spending $15k on a software problem

Finally, I got fed up with waiting for a solution and decided on a list of changes we were going to make, regardless of what the software developer said. First, we upgraded FireBird to the current x64 version. We then switched to SuperServer mode, which uses one FireBird process and shares its cache among all connections. With this, we had to configure the FireBird process to use more than one CPU, which was an easy option in firebird.conf. Next, we changed the database page size to 16KB, increased the maximum page buffers to 650,000, and told the database to use asynchronous write-caching. Then we set FireBird to use the file-system cache only, rather than its own cache; in some cases, FireBird will use both, which essentially doubles its memory usage for no reason. Finally, we set the FireBird database to sweep its old transactions every 100,000 transactions instead of every 20,000. Sweeping eats CPU resources, so we wanted to do it less frequently. They do 100k transactions in about an hour.

Today was the first day after making all of these changes, and they are faster than they have ever been. Their software went from taking 16 minutes to open and be ready to use to 40 seconds. FireBird is using only 6GB of memory and about 20% CPU. (I decided to start small on the memory and increase it as we tested more.) They ran all day without a single failure or slow-down. Here is the write-up on exactly how we made our changes and how to check the current settings.


TL;DR: The software company set up their database using defaults and had no idea how to optimize it. Instead of taking them at their word, we did our own research and made changes that improved speeds dramatically.

Controlling Terminal Server Shutdowns/Reboots


Here is a quick and easy way to control who can and can't shut down or reboot your terminal server (TS). You'd be surprised how many times I've seen this enabled for all users when we take over an established client's infrastructure. This quick guide will show you how to enable shutdown/reboot of a TS for a specific group of users and disable it for all other users. There are other ways to do this, but I prefer this method because it's quick and easy, especially if you only have one or two TSes. Keep in mind, this also applies to users logging in locally, not just through RDP.

First, we need to create a global security group in Active Directory. In my case, I called it "Shutdown TS". Now add whichever users you would like to be able to shut down the TS to the newly created group.

Next, we'll need to open the Local Security Policy on the TS (secpol.msc). (See image below)

Then we’ll navigate to: Computer Config -> Windows Settings -> Security Settings -> Local Policies -> User Rights Assignment. Find the policy called “Shut down the system” (See image below)

Find Policy

Now double-click the policy to edit it. Remove any groups you don't want to have shutdown rights and add the newly created group. (See image below)

Add Group


Finally, we just need to force the group policy to update. Run “gpupdate /force”

Below are images from two different users’ start menu. Administrator is in the group and temp is not.

Sweet Chili Pineapple Smoked Sausage

Sweet Chili Pineapple Smoked Sausage

I made this tonight for the family and they absolutely loved it. It made enough to feed three children and two adults. Prep time was about 20 minutes and it cost about $10. Let me know what you think and how you made it better!


Ingredients:

  • Cooked Rice – 3 cups
  • Smoked Sausage
  • Pineapple Chunks – 1 cup
  • Red Bell Pepper – 1
  • Sweet Chili Sauce – 1/3 cup


Directions:

  1. Cut the red bell pepper and smoked sausage into 1/2″ pieces. Add to a large skillet and cook over medium-high heat until the sausage is browned and the bell pepper is tender.
  2. Add pineapple and sweet chili sauce and cook for 5 more minutes.
  3. Pour over rice and enjoy!

Installing a Network Printer via Command Line

Recently I had to install a printer on a lot of workstations in a short amount of time. I came up with this script to make things a lot faster. Below the script breakdown is a link to the script to copy/paste. I deployed it via our remote management software (GFI Max), but it could be used with PsExec, group policy, or a logon script as well.

All of the VBS files referenced are located at “C:\Windows\system32\Printing_Admin_Scripts\” on all Win7 and XP machines.

Script Breakdown

Here is a link to the script in plaintext.

@echo off
REM This line removes the port if its already added
cscript "\\master-server\data\test\scripts\Prnport.vbs" -d -r IP_10.0.0.205

REM This line removes the printer if its already added
cscript "\\master-server\data\test\scripts\Prnmngr.vbs" -d -p "Brother Color Laser Printer"

REM This line adds the port
cscript "\\master-server\data\test\scripts\Prnport.vbs" -a -r IP_10.0.0.205 -h -o raw -n 9100

REM This line adds the printer driver
cscript "\\master-server\data\test\scripts\Prndrvr.vbs" -a -m "Brother MFC-9970CDW Printer" -i \\master-server\data\test\drivers\32\brpoc10a.inf -h \\master-server\data\test\drivers\32

REM this line adds the printer and specifies which driver/port to use
cscript "\\master-server\data\test\scripts\Prnmngr.vbs" -a -p "Brother Color Laser Printer" -m "Brother MFC-9970CDW Printer" -r IP_10.0.0.205
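If you're pushing the script out with PsExec instead of an RMM tool, something like this works (the computer list and script name here are hypothetical):

```shell
# Run the install script as SYSTEM on every computer listed in workstations.txt
psexec @workstations.txt -s cmd /c "\\master-server\data\test\scripts\install_printer.bat"
```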

Fun at the Denver Zoo

Maggie and Daddy